Title Page
Copyright Page
Dedication
Preface
Guide to the Reader
Chapter 1 - Multivariable Calculus
1.1 Vectors
1.2 Functions of multiple variables
1.3 Multiple integrals
1.4 Partial derivatives
1.5 Gradients
Chapter 2 - Parameterizations
2.1 Parameterized curves in \mathbb{R}^2
2.2 Cylindrical and spherical coordinates
2.3 Parameterized surfaces in \mathbb{R}^3
2.4 Parameterized curves in \mathbb{R}^3
2.5 Parameterized regions in \mathbb{R}^2 and \mathbb{R}^3
Chapter 3 - Introduction to Forms
3.1 So what is a differential form?
3.2 Generalizing the integral
3.3 Interlude: a review of single variable integration
3.4 What went wrong?
3.5 What about surfaces?
Chapter 4 - Forms
4.1 Coordinates for vectors
4.2 1-forms
4.3 Multiplying 1-forms
4.4 2-forms on T_p\mathbb{R}^3 (optional)
4.5 2-forms and 3-forms on T_p\mathbb{R}^4 (optional)
4.6 n-forms
4.7 Algebraic computation of products
Chapter 5 - Differential Forms
5.1 Families of forms
5.2 Integrating differential 2-forms
5.3 Orientations
5.4 Integrating 1-forms on \mathbb{R}^m
5.5 Integrating n-forms on \mathbb{R}^m
5.6 The change of variables formula
5.7 Summary: How to integrate a differential form
Chapter 6 - Differentiation of Forms
6.1 The derivative of a differential 1-form
6.2 Derivatives of n-forms
6.3 Interlude: 0-forms
6.4 Algebraic computation of derivatives
6.5 Antiderivatives
Chapter 7 - Stokes' Theorem
7.1 Cells and chains
7.2 The generalized Stokes' Theorem
7.3 Vector calculus and the many faces of the generalized Stokes' Theorem
Chapter 8 - Applications
8.1 Maxwell's equations
8.2 Foliations and contact structures
8.3 How not to visualize a differential 1-form
Chapter 9 - Manifolds
9.1 Pull-backs
9.2 Forms on subsets of \mathbb{R}^n
9.3 Forms on parameterized subsets
9.4 Forms on quotients of \mathbb{R}^n (optional)
9.5 Defining manifolds
9.6 Differential forms on manifolds
9.7 Application: DeRham cohomology
A - Non-linear forms
A.1 Surface area
A.2 Arc length
References
Index
Solutions
David Bachman
A Geometric Approach to Differential Forms
Birkhäuser
Boston • Basel • Berlin
David Bachman
Pitzer College
Department of Mathematics
Claremont, CA 91711
U.S.A.
The use in this publication of trade names, trademarks, service marks and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.
Printed in the United States of America.
www.birkhauser.com
To Sebastian and Simon
Preface
The present work is not meant to contain any new material about differential forms. There are many good books out there which give complete treatments of the subject. Rather, the goal here is to make the topic of differential forms accessible to the sophomore level undergraduate, while still providing material that will be of interest to more advanced students.
There are three tracks through this text. The first is a course in Multivariable Calculus, suitable for the third semester in a standard calculus sequence. The second track is a sophomore level Vector Calculus class. The last track is for advanced undergraduates, or even beginning graduate students. At many institutions, a course in linear algebra is not a prerequisite for either multivariable calculus or vector calculus. Consequently, this book has been written so that the earlier chapters do not require many concepts from linear algebra. What little is needed is covered in the first section.
The book begins with basic concepts from multivariable calculus such as partial derivatives, gradients and multiple integrals. All of these topics are introduced in an informal, pictorial way to quickly get students to the point where they can do basic calculations and understand what they mean. The second chapter focuses on parameterizations of curves, surfaces and three-dimensional regions. We spend considerable time here developing tools which will help students find parameterizations on their own, as this is a common stumbling block.
Chapter 3 is purely motivational. It is included to help students understand why differential forms arise naturally when integrating over parameterized domains.
The heart of this text is Chapters 4 through 7. In these chapters, the entire machinery of differential forms is developed from a geometric standpoint. New ideas are always introduced with a picture. Verbal descriptions of geometric actions are set out in boxes.
Chapter 7 focuses on the development of the generalized Stokes' Theorem. This is really the centerpiece of the text. Everything that precedes it is there for the sole purpose of its development. Everything that follows is an application. The equation is simple:
\int_{\partial C} \omega = \int_{C} d\omega.
Yet it implies, for example, all integral theorems of classical vector analysis. Its simplicity is precisely why it is easier for students to understand and remember than these classical results.
Chapter 7 concludes with a discussion on how to recover all of vector calculus from the generalized Stokes' Theorem. By the time students get through this they tend to be more proficient at vector integration than after traditional classes in vector calculus. Perhaps this will allay some of the concerns many will have in adopting this textbook for traditional classes.
Chapter 8 contains further applications of differential forms. These include Maxwell's equations and an introduction to the theory of foliations and contact structures. This material should be accessible to anyone who has worked through Chapter 7.
Chapter 9 is intended for advanced undergraduate and beginning graduate students. It focuses on generalizing the theory of differential forms to the setting of abstract manifolds. The final section contains a brief introduction to DeRham cohomology.
We now describe the three primary tracks through this text.
Track 1. Multivariable Calculus (Calculus III). For such a course, one should focus on the definitions of n-forms on \mathbb{R}^m, where n and m are at most 3. The following Chapters/Sections are suggested:
Chapter 1, perhaps supplementing Section 1.5 with additional material on max/min problems,
Chapter 2,
Chapter 4, excluding Sections 4.4 and 4.5 due to time constraints,
Chapters 5-7,
Appendix A.
Track 2. Vector Calculus. In this course, one should mention that for n-forms on \mathbb{R}^m the numbers n and m could be anything, although in practice it is difficult to work examples when either is bigger than 4. The following Chapters/Sections are suggested:
Section 1.1 (unless Linear Algebra is a prerequisite),
Chapter 2,
Chapter 3 (one lecture),
Chapters 4-7,
Chapter 8, as time permits.
Track 3. Upper Division Course. Students should have had linear algebra, and perhaps even basic courses in group theory and topology.
Chapter 3 (perhaps as a reading assignment),
Chapters 4-7 (relatively quickly),
Chapters 8 and 9.
The original motivation for this book came from [GP74], the text I learned differential topology from as a graduate student. In that text, differential forms are defined in a highly algebraic manner, which left me craving something more intuitive. In searching for a more geometric interpretation, I came across Chapter 7 of Arnold's text on classical mechanics [Arn97], where there is a wonderful introduction to differential forms given from a geometric viewpoint. In some sense, the present work is an expansion of the presentation given there. Hubbard and Hubbard's text [HH01] was also a helpful reference during the preparation of this manuscript.
The writing of this book began with a set of lecture notes from an introductory course on differential forms, given at Portland State University, during the summer of 2000. The notes were then revised for subsequent courses on multivariable calculus and vector calculus at California Polytechnic State University, San Luis Obispo and Pitzer College.
I thank several people. First and foremost, I am grateful to all those students who survived the earlier versions of this book. I would also like to thank several of my colleagues for giving me helpful comments. Most notably, Don Hartig, Matthew White and Jim Hoste had several comments after using earlier versions of this text for vector or multivariable calculus courses. John Etnyre and Danny Calegari gave me feedback regarding Chapter 8 and Saul Schleimer suggested Example 27. Other helpful suggestions were provided by Ryan Derby-Talbot. Alvin Bachman suggested some of the formatting of the text. Finally, the idea to write this text came from conversations with Robert Ghrist while I was a graduate student at the University of Texas at Austin.
Claremont, CA
March, 2006
David Bachman
Guide to the Reader
It often seems like there are two types of students of mathematics: those who prefer to learn by studying equations and following derivations, and those who prefer pictures. If you are of the former type, this book is not for you. However, it is the opinion of the author that the topic of differential forms is inherently geometric, and thus should be learned in a visual way. Of course, learning mathematics in this way has serious limitations: how can one visualize a 23-dimensional manifold? We take the approach that such ideas can usually be built up by analogy to simpler cases. So the first task of the student should be to really understand the simplest case, which CAN often be visualized.
Fig. 0.1. The faces of the n-dimensional cube come from connecting the faces of two copies of an (n-1)-dimensional cube.
For example, suppose one wants to understand the combinatorics of the n-dimensional cube. We can visualize a 1-D cube (i.e., an interval), and see just from our mental picture that it has two boundary points. Next, we can visualize a 2-D cube (a square), and see from our picture that this has four intervals on its boundary. Furthermore, we see that we can construct this 2-D cube by taking two parallel copies of our original 1-D cube and connecting the endpoints. Since there are two endpoints, we get two new intervals, in addition to the two we started with (see Fig. 0.1). Now, to construct a 3-D cube, we place two squares parallel to each other, and connect up their edges. Each time we connect an edge of one square to an edge of the other, we get a new square on the boundary of the 3-D cube. Hence, since there were four edges on the boundary of each square, we get four new squares, in addition to the two we started with, making six in all. Now, if the student understands this, then it should not be hard to convince him/her that every time we go up a dimension, the number of lower-dimensional cubes on the boundary is the same as in the previous dimension, plus two. Finally, from this we can conclude that there are 2n (n-1)-dimensional cubes on the boundary of the n-dimensional cube.
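The counting argument above can also be checked by a short computation. The following is a minimal Python sketch (not part of the text): it encodes the "previous count plus two" recurrence from the construction and compares it with the closed form 2n.

```python
def boundary_faces(n):
    """Number of (n-1)-dimensional cubes on the boundary of the n-cube,
    counted by the two-copies-plus-connections construction above."""
    if n == 1:
        return 2  # an interval has two boundary points
    # two parallel copies of the (n-1)-cube, plus one new face for each
    # face on the boundary of the (n-1)-cube
    return 2 + boundary_faces(n - 1)

# the recurrence agrees with the closed form 2n
for n in range(1, 7):
    assert boundary_faces(n) == 2 * n
```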
Note the strategy in the above example: we understand the "small" cases visually, and use them to generalize to the cases we cannot visualize. This will be our approach in studying differential forms.
Perhaps this goes against some trends in mathematics in the last several hundred years. After all, there were times when people took geometric intuition as proof, and later found that their intuition was wrong. This gave rise to the formalists, who accepted nothing as proof that was not a sequence of formally manipulated logical statements. We do not scoff at this point of view. We make no claim that the above derivation for the number of (n-1)-dimensional cubes on the boundary of an n-dimensional cube is actually a proof. It is only a convincing argument that gives enough insight to actually produce a proof. Formally, a proof would still need to be given. Unfortunately, all too often the classical math book begins the subject with the proof, which hides all of the geometric intuition that the above argument leads to.
1
Multivariable Calculus
1.1 Vectors
A vector is a lot like a point in space. The primary difference is that we do not usually think about doing algebra with points, while algebra with vectors is common.
When one switches from talking about points like (1,2) to vectors like \langle 1,2\rangle, both the language and notation change. We will be very consistent in this text about using parentheses to denote points and brackets to denote vectors. When discussing the point (1,2) we say the numbers 1 and 2 are its coordinates. If we are discussing the vector \langle 1,2\rangle then 1 and 2 are its components.
One often visualizes a vector \langle a, b\rangle as an arrow from the point (0,0) to the point (a, b). This has some pleasant features. First, it immediately follows from the Pythagorean Theorem that the length of the arrow representing the vector \langle a, b\rangle is
|\langle a, b\rangle| = \sqrt{a^2 + b^2}.
We add vectors just as one would hope:
\langle a, b\rangle + \langle c, d\rangle = \langle a+c, b+d\rangle.
Geometrically, adding a vector V_1 to a vector V_2 is equivalent to sliding V_2 along V_1 until its "tail" is at the "tip" of V_1. The vector which represents the sum V_1 + V_2 is then the one which connects the tail of V_1 to the tip of V_2. See Figure 1.1.
Multiplication is a bit trickier. The most basic kind of multiplication involves a number and a vector, as follows:
c\langle a, b\rangle = \langle ca, cb\rangle.
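For readers who like to experiment, the operations defined so far (componentwise addition, scalar multiplication, and length) are easy to model on a computer. The helper functions below are hypothetical, introduced only as an illustrative sketch.

```python
import math

def add(v, w):
    # componentwise vector addition
    return tuple(a + b for a, b in zip(v, w))

def scale(c, v):
    # multiply each component by the number c
    return tuple(c * a for a in v)

def length(v):
    # Pythagorean length of a vector
    return math.sqrt(sum(a * a for a in v))

v = (3, 4)
assert length(v) == 5.0
# scaling by a positive c multiplies the length by c
assert length(scale(2, v)) == 2 * length(v)
assert add((1, 2), (3, 4)) == (4, 6)
```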
1.1. Use similar triangles to show that c\langle a, b\rangle is a vector that points in the same direction as \langle a, b\rangle, but has a length that is c times as large.
Fig. 1.1. Adding vectors.
1.2. Find a vector that points in the same direction as \langle 3, 4\rangle, but has length one. (Such a vector is called a unit vector.)
To define the product of two vectors, the simplest thing to do is to multiply componentwise:
\langle a, b\rangle \langle c, d\rangle = \langle ac, bd\rangle.
There is nothing wrong with this, but it does not turn out to be terribly useful. Perhaps the reason is that this definition does not lend itself to a good geometric interpretation.
A more useful way to multiply vectors is called the dot product. The trick with the dot product is to define the product of two vectors to be the number
\langle a, b\rangle \cdot \langle c, d\rangle = ac + bd.
Fig. 1.2. The dot product of V_1 and V_2 is L times the length of V_1.
There are two noteworthy things that immediately follow from this definition. First, notice that if V_1 = \langle a, b\rangle, then V_1 \cdot V_1 = a^2 + b^2 = |V_1|^2. Second, notice that the slope of the line containing V_1 is b/a. If V_2 = \langle c, d\rangle is perpendicular to V_1 then d/c = -a/b. Cross-multiplying then gives bd = -ac, and hence, ac + bd = 0. We conclude the dot product of perpendicular vectors is zero.
Both of these facts also follow from the geometric interpretation of the dot product shown in Figure 1.2. In this figure, we see that V_1 \cdot V_2 is the length of the projection of V_2 onto V_1, times the length of V_1. Letting \theta be the angle between these two vectors leads to an alternate way to compute dot products:
V_1 \cdot V_2 = |V_1||V_2| \cos\theta.
To see this, note that the length L of the projection of V_2 onto V_1 is given by |V_2| \cos\theta.
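These facts are easy to spot-check numerically. Below is a hedged Python sketch (the helpers `dot` and `length` are our own, not from the text) verifying the squared-length identity, the cosine formula, and the perpendicularity criterion on sample vectors.

```python
import math

def dot(v, w):
    # dot product: sum of products of matching components
    return sum(a * b for a, b in zip(v, w))

def length(v):
    return math.sqrt(dot(v, v))

v1, v2 = (3, 0), (1, 1)

# the dot product of a vector with itself is its squared length
assert dot(v1, v1) == length(v1) ** 2

# alternate formula: V1 . V2 = |V1| |V2| cos(theta)
theta = math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0])
assert math.isclose(dot(v1, v2), length(v1) * length(v2) * math.cos(theta))

# perpendicular vectors have dot product zero
assert dot((1, 2), (-2, 1)) == 0
```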
1.3. Suppose V_1 = \langle a, b\rangle, V_2 = \langle c, d\rangle and \theta is the angle between them. Show that
V_1 \cdot V_2 = |V_1||V_2| \cos\theta.
1.4. Use the dot product to compute the cosine of the angle between the vectors \langle 1, 2\rangle and \langle 4, 2\rangle.
Another geometric quantity that we will need is the area of the parallelogram spanned by two vectors.
1.5. Suppose V_1 = \langle a, b\rangle and V_2 = \langle c, d\rangle. Show that the area of the parallelogram spanned by these vectors is |ad - bc|.
One common way to denote a set of vectors is by writing a matrix where the vectors appear as columns (or rows). The determinant of such a matrix is then defined to be the (signed) area of the parallelogram spanned by its column vectors. So, from the last exercise we have:
\left|\begin{array}{ll} a & c \\ b & d \end{array}\right| = ad - bc.
Notice that this answer may be negative. This is because the determinant not only tells us area, but also something about the order of the vectors \langle a, b\rangle and \langle c, d\rangle.
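The signed-area behavior of the determinant can be verified directly. The helper `det2` below is hypothetical, introduced only for illustration: it transcribes the 2 × 2 formula and checks that swapping the vectors flips the sign.

```python
def det2(v, w):
    # determinant of the matrix whose columns are v and w
    a, b = v
    c, d = w
    return a * d - b * c

# area of the unit square spanned by e1 and e2
assert det2((1, 0), (0, 1)) == 1
# swapping the vectors flips the sign but not the absolute area
assert det2((0, 1), (1, 0)) == -1
# parallel vectors span a degenerate parallelogram of area zero
assert det2((2, 4), (1, 2)) == 0
```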
Everything we have discussed above generalizes to higher dimensions. For example, if V_1 = \langle a, b, c\rangle, then the length of V_1 is given by
|V_1| = \sqrt{a^2 + b^2 + c^2}.
The same geometric interpretation of addition (sliding V_2 until its tail ends up at the tip of V_1) holds in higher dimensions as well. The dot product also works as expected in higher dimensions: if V_2 = \langle d, e, f\rangle, then
V_1 \cdot V_2 = ad + be + cf,
and its geometric interpretation as the projected length of V_2, times the length of V_1, holds.
It is a bit harder to show, but if V_3 = \langle g, h, i\rangle, then the volume of the parallelepiped spanned by V_1, V_2 and V_3 is given by the absolute value of:
\left|\begin{array}{lll} a & d & g \\ b & e & h \\ c & f & i \end{array}\right| = (aei + dhc + gbf) - (ahf + dbi + gec).
This is the formula for the determinant of a 3 \times 3 matrix.
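The 3 × 3 formula can likewise be spot-checked. The helper `det3` below is our own illustrative transcription of the expansion above; an axis-aligned box should give volume 2 · 3 · 4 = 24, and coplanar vectors should give zero.

```python
def det3(u, v, w):
    # columns u = <a,b,c>, v = <d,e,f>, w = <g,h,i>
    a, b, c = u
    d, e, f = v
    g, h, i = w
    # the expansion given in the text
    return (a*e*i + d*h*c + g*b*f) - (a*h*f + d*b*i + g*e*c)

# an axis-aligned box with side lengths 2, 3, 4 has volume 24
assert det3((2, 0, 0), (0, 3, 0), (0, 0, 4)) == 24
# coplanar vectors span zero volume
assert det3((1, 0, 0), (0, 1, 0), (1, 1, 0)) == 0
```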
1.6. Find the volume of the parallelepiped spanned by the vectors \langle 1, 0, 1\rangle, \langle 1, 2, 3\rangle and \langle 2, 5, 3\rangle.
1.2 Functions of multiple variables
We denote by \mathbb{R}^n the set of points with n coordinates. If n is between 1 and 3 these spaces are very familiar. For example, \mathbb{R}^1 is just the number line whose depiction hangs above every elementary school blackboard. The space \mathbb{R}^2 is just the xy-plane that we employ so often in precalculus and calculus. And \mathbb{R}^3 is, of course, familiar as the three-dimensional space that we feel like we experience every day (mathematicians and physicists debate whether or not this is really the space we live in).
The space \mathbb{R}^4 is probably less familiar. One can think of the extra coordinate as time, or color, or anything else that gives more information. At some point we must just give up on visualization. There is no way to picture \mathbb{R}^{20}. This does not mean it is useless. To model the stock market, for example, one may want to represent its state at a particular point in time as a point with as many coordinates as there are stocks.
Fortunately for us, if you really understand differential forms in dimensions up to three, then very little needs to be addressed to generalize to higher dimensions.
In this text we will often represent functions abstractly by saying how many numbers go into the function, and how many come out. So, if we write f: \mathbb{R}^n \rightarrow \mathbb{R}^m, we mean f is a function whose input is a point with n coordinates and whose output is a point with m coordinates.
Some cases of this are familiar. For example, if y = f(x) is a typical function from Calculus I, then f: \mathbb{R}^1 \rightarrow \mathbb{R}^1.
In this chapter, we focus on functions of the form f: \mathbb{R}^2 \rightarrow \mathbb{R}^1. These functions look something like z = f(x, y). To graph such a function we draw x-, y-, and z-axes in \mathbb{R}^3 and plot all the points where the equation z = f(x, y) is true.
Computers can really help one visualize such graphs. It is worthwhile to play with any software package that will graph such functions. But it is equally worthwhile to learn a few techniques to sketch such graphs by hand.
The easiest way to begin to get a feel for a graph is by drawing its intersection with the coordinate planes. To sketch the intersection with the xz-plane, for example, set y equal to zero and graph the resulting function. Similarly, to sketch the intersection with the yz-plane, set x equal to zero.
A similar approach involves sketching level curves. These are just the intersections of horizontal planes of the form z = n with the graph. To sketch such a curve, one simply plots the graph of n = f(x, y).
Putting all of this information together on one set of axes can be a challenge (see Figure 1.3). Some artistic ability and some ability to visualize three-dimensional shapes is helpful, but nothing substitutes for lots of practice.
1.7. Sketch the graphs of
1. z = 2x - 3y.
2. z = x^2 + y^2.
3. z = xy (compare with Figure 1.3).
4. z = \sqrt{x^2 + y^2}.
5. z = \frac{1}{\sqrt{x^2 + y^2}}.
6. z = \sqrt{x^2 + y^2 + 1}.
7. z = \sqrt{x^2 + y^2 - 1}.
8. z = \cos(x + y).
9. z = \cos(xy).
10. z = \cos(x^2 + y^2).
11. z = e^{-(x^2 + y^2)}.
1.8. Find functions whose graphs are
A plane through the origin at 45° to both the x- and y-axes.
The top half of a sphere of radius two.
The top half of a torus centered around the zz-axis (i.e., the tube of radius one, say, centered around a circle of radius two in the xyx y-plane).
The top half of the cylinder of radius one which is centered around the line where the plane y=xy=x meets the plane z=0z=0.
You may find it helpful to check your answers to the above exercises with a computer graphing program.
Fig. 1.3. Several views of the graph of z = x^2 - y^2. The top two figures are the intersections with the xz- and yz-planes. The bottom left shows several level curves.
1.3 Multiple integrals
We now address the question of how to find the volume under the graph of a function f(x,y)f(x, y) of two variables. Recall from Calculus I that we define the integral of a function g(x)g(x) of one variable on the interval [0,a][0, a] by the following steps:
Choose a sequence of evenly spaced points \{x_i\}_{i=0}^{n} in [0, a] such that x_0 = 0 and x_n = a.
Let \Delta x = x_{i+1} - x_i.
For each i compute g(x_i)\Delta x.
Sum over all i.
Take the limit as n goes to \infty.
The intuition is that each term in Step 3 above gives the area of a rectangle. Piecing all of the rectangles together gives an approximation for the function g(x), so the result of Step 4 is an approximation for the desired area. As n goes to \infty in Step 5, this approximation gets better and better.
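The five steps above can be sketched in a few lines of Python. The choice g(x) = x^2 on [0, 1], with exact area 1/3, is our own illustrative example, not from the text.

```python
def riemann_sum(g, a, n):
    # steps 1-4: n evenly spaced points on [0, a], width dx,
    # and the sum of the rectangle areas g(x_i) * dx
    dx = a / n
    return sum(g(i * dx) * dx for i in range(n))

# step 5: the sums approach the exact area 1/3 as n grows
for n in (10, 100, 1000):
    print(n, riemann_sum(lambda x: x**2, 1.0, n))

assert abs(riemann_sum(lambda x: x**2, 1.0, 1000) - 1/3) < 1e-3
```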
Similar steps define the volume under f(x, y). Let R be the rectangle in the xy-plane with vertices at (0,0), (a,0), (0,b) and (a,b). We now perform the following steps:
Choose sequences of evenly spaced points \{x_i\}_{i=0}^{n} and \{y_j\}_{j=0}^{m} such that x_0 = y_0 = 0, x_n = a and y_m = b. This gives a lattice of points of the form (x_i, y_j) in R.
Let \Delta x = x_{i+1} - x_i and \Delta y = y_{j+1} - y_j.
For each i and j compute f(x_i, y_j)\Delta x \Delta y.
Sum over all i and j.
Take the limit as n and m go to \infty.
These steps define \int_{R} f(x, y)\,dx\,dy. The intuition as to why this represents the desired volume is similar to that in the one variable case. In Step 3 we are computing the volume of a box whose base is a \Delta x by \Delta y rectangle, and whose height is f(x_i, y_j) (see Figure 1.4). Putting these boxes together approximates the function f(x, y), and this approximation gets better and better when n and m go to \infty.
It is important to understand the above definition from a theoretical point of view. Later in this text we will come back to it many times. Unfortunately, it is almost impossible to use this definition to compute any integrals. For this, we need an alternate point of view.
Instead of approximating f(x,y)f(x, y) with boxes as above, we will now approximate it by "slabs" whose profiles look like slices by planes parallel to one of the coordinate planes (see Figure 1.6). To do this we carry out the following steps:
Choose a sequence of evenly spaced points \{x_i\}_{i=0}^{n} in [0, a] such that x_0 = 0 and x_n = a.
Let \Delta x = x_{i+1} - x_i.
For each i compute
\left[\int_{0}^{b} f(x_i, y)\,dy\right]\Delta x.
Sum over all i.
Take the limit as n goes to \infty.
Note that in Step 3 the quantity \int_{0}^{b} f(x_i, y)\,dy is exactly the area under the curve that you get when you slice the graph of f(x, y) by the plane parallel to the yz-plane at x = x_i (see Figure 1.5). Multiplying by \Delta x then gives the volume of a slab of thickness \Delta x, with the same profile as this slice. Putting these slabs together still approximates the function f(x, y), and this approximation gets better and better as n goes to \infty (see Figure 1.6). The result is the following:
Fig. 1.4. Using boxes to approximate a function.
\int_{R} f(x, y)\,dx\,dy = \int_{0}^{a}\left[\int_{0}^{b} f(x, y)\,dy\right] dx
Of course, we could have added up volumes of the slabs that were parallel to the xz-plane instead. This process would have produced the following equality:
\int_{R} f(x, y)\,dx\,dy = \int_{0}^{b}\left[\int_{0}^{a} f(x, y)\,dx\right] dy
Hence we see that Fubini's theorem must be true:
\int_{0}^{a} \int_{0}^{b} f(x, y)\,dy\,dx = \int_{0}^{b} \int_{0}^{a} f(x, y)\,dx\,dy
Fig. 1.5. The area A of the slice through x = x_i is given by \int_{0}^{b} f(x_i, y)\,dy.
Fig. 1.6. Putting slabs together approximates the function f(x, y).
Note 1. Be aware that we have avoided very technical issues here such as continuity and convergence. For a rigorous treatment, see any standard text in multivariable calculus.
Example 1. To find the volume under the graph of f(x, y) = xy^2 and above the rectangle R with vertices at (0,0), (2,0), (0,3) and (2,3) we compute:
\begin{aligned}
\int_{R} x y^{2}\,dx\,dy &= \int_{0}^{3} \int_{0}^{2} x y^{2}\,dx\,dy \\
&= \int_{0}^{3}\left[\left.\frac{1}{2} x^{2} y^{2}\right|_{x=0}^{2}\right] dy \\
&= \int_{0}^{3} 2 y^{2}\,dy \\
&= 18.
\end{aligned}
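The box definition from earlier in this section should agree with this answer, and a rough numerical check is easy to write. This is an illustrative sketch using left-endpoint boxes, not part of the text; the tolerance is loose because left-endpoint sums converge slowly.

```python
def double_riemann(f, a, b, n, m):
    # box sum from the definition: lattice points (i*dx, j*dy),
    # each box contributing f(x_i, y_j) * dx * dy
    dx, dy = a / n, b / m
    return sum(f(i * dx, j * dy) * dx * dy
               for i in range(n) for j in range(m))

# Example 1: volume under f(x, y) = x * y**2 over [0,2] x [0,3] is 18
approx = double_riemann(lambda x, y: x * y**2, 2.0, 3.0, 400, 600)
assert abs(approx - 18) < 0.15
```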
1.9. Let R be the rectangle in the xy-plane with vertices at (1,0), (2,0), (1,3) and (2,3). Integrate the following functions over R.
1. x^2 y^2.
2. 1.
3. x^2 + y^2.
4. \sqrt{x + \frac{2}{3} y}.
1.4 Partial derivatives
In this section, we begin to discuss tangent lines to the graph of a function of the form f: \mathbb{R}^2 \rightarrow \mathbb{R}^1. If we slice the graph of such a function with the plane parallel to the yz-plane, through the point (x_0, y_0), then we get a curve which represents some function of y. We can then ask, "What is the slope of the tangent line to this curve when y = y_0?" The answer to this question is precisely the definition of \frac{\partial f}{\partial y}(x_0, y_0) (see Figure 1.7).
Example 2. Suppose f(x, y) = xy^2. We wish to compute \frac{\partial f}{\partial y}(2,3). The slice of the graph of f(x, y), parallel to the yz-plane, through the point (2,3), is given by substituting 2 for x. This gives us the function 2y^2. Differentiating with respect to y then gives 4y. Plugging in 3 for y yields 12.
If we instead wish to compute \frac{\partial f}{\partial y}(4,3), we could go through the same steps. The slice through the point is the graph of 4y^2. Differentiating with respect to y gives 8y. Evaluating at y = 3 yields 24.
Fig. 1.7. The partial derivative with respect to yy.
If we wish to repeat this many more times, it will be easier to leave the variable x in, but think of it as a constant. Hence, differentiating xy^2 with respect to y gives 2xy, and we can now plug in whatever numbers we want for x and y to obtain a final answer immediately.
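A difference quotient in y gives a quick numerical check of Example 2. The helper `partial_y` below is our own illustrative sketch, not from the text: it holds x fixed and differences in y.

```python
def partial_y(f, x, y, h=1e-6):
    # centered difference quotient in y, with x held constant
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

f = lambda x, y: x * y**2   # the function from Example 2

# Example 2 computed 12 at (2, 3) and 24 at (4, 3)
assert abs(partial_y(f, 2, 3) - 12) < 1e-4
assert abs(partial_y(f, 4, 3) - 24) < 1e-4
```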
Partial derivatives with respect to x are just as easy to compute. Geometrically, we think of this as giving the slope of a line tangent to the graph which is the slice parallel to the xz-plane. Algebraically, we think of y as a constant and take the derivative with respect to x.
1.10. Compute \frac{\partial f}{\partial x} and \frac{\partial f}{\partial y} for each of the following functions.
1. x^2 y^3.
2. \sin(x^2 y^3).
3. x \sin(xy).
Notice that when you take a partial derivative you get another function of x and y. You can then do it again to find the second partials. These are denoted by:
\begin{aligned}
\frac{\partial^{2} f}{\partial y^{2}} &= \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial y}\right) \\
\frac{\partial^{2} f}{\partial x \partial y} &= \frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right) \\
\frac{\partial^{2} f}{\partial y \partial x} &= \frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right)
\end{aligned}
1.11. Find all second partials for each of the functions in the previous exercise.
Note that, amazingly, the "mixed" partials \frac{\partial^{2} f}{\partial x \partial y} and \frac{\partial^{2} f}{\partial y \partial x} are always equal. This is not a coincidence! Somehow the mixed partials measure the "twisting" of the graph, and this is the same from every direction.
1.5 Gradients
Let's look back to Figure 1.7. What if we sliced the graph of f(x, y) with some vertical plane through the point (x_0, y_0) that was not parallel to the xz- or yz-planes, as in Figure 1.8? How could we compute the slope then?
Fig. 1.8. A directional derivative.
To answer this, visualize the set of all lines tangent to the graph of f(x, y) at the point (x_0, y_0). The result is a tangent plane.
The equation for a plane through the origin in \mathbb{R}^3 is of the form z = m_x x + m_y y. Notice that the intersection of such a plane with the xz-plane is the graph of z = m_x x. Hence, m_x is the slope of this line of intersection. Similarly, the quantity m_y is the slope of the line which is the intersection with the yz-plane.
To get a plane through the point ( x_(0),y_(0),f(x_(0),y_(0))x_{0}, y_{0}, f\left(x_{0}, y_{0}\right) ), we can translate the origin to this point by replacing xx with x-x_(0),yx-x_{0}, y with yyy_(0)y_{0} and zz with z-f(x_(0),y_(0))z-f\left(x_{0}, y_{0}\right) :
Since we want this to actually be a tangent plane, it follows that $m_x$ must be equal to $\frac{\partial f}{\partial x}$ and $m_y$ must be $\frac{\partial f}{\partial y}$. Hence, the equation of the tangent plane $T$ is given by
$$T(x, y) = \frac{\partial f}{\partial x}(x - x_0) + \frac{\partial f}{\partial y}(y - y_0) + f(x_0, y_0),$$
where $\frac{\partial f}{\partial x}$, $\frac{\partial f}{\partial y}$, and $f$ are all evaluated at the point $(x_0, y_0)$.
Now, suppose $P$ is the vertical plane through the point $(x_0, y_0)$ depicted in Figure 1.9. Let $l$ denote the line where $P$ intersects the $xy$-plane. The tangent line $L$ to the graph of $f$, which lies above $l$, is also the line contained in $T$ which lies above $l$. To figure out the slope of $L$ we will simply compute "rise over run."
Suppose $l$ contains the vector $V = \langle a, b \rangle$, where $|V| = 1$. Then two points on $l$, a distance of 1 apart, are $(x_0, y_0)$ and $(x_0 + a, y_0 + b)$. Thus the "run" will be equal to 1. The "rise" is the difference between $T(x_0, y_0)$ and $T(x_0 + a, y_0 + b)$, which we compute as follows:
$$T(x_0 + a, y_0 + b) - T(x_0, y_0) = \frac{\partial f}{\partial x}(x_0 + a - x_0) + \frac{\partial f}{\partial y}(y_0 + b - y_0) = a\frac{\partial f}{\partial x} + b\frac{\partial f}{\partial y}.$$
Since the slope of $L$ is "rise" over "run," and the "run" equals 1, we conclude the slope of $L$ is $a\frac{\partial f}{\partial x} + b\frac{\partial f}{\partial y}$, where $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ are evaluated at the point $(x_0, y_0)$.
1.12. Suppose $f(x, y) = x^2 y^3$. Compute the slope of the line tangent to $f(x, y)$, at the point $(2, 1)$, in the direction $\left(\frac{\sqrt{2}}{2}, -\frac{\sqrt{2}}{2}\right)$.
1.13. Let $f(x, y) = xy + x - 2y + 4$. Find the slope of the tangent line to the graph of $f(x, y)$, in the direction of $\langle 1, 2 \rangle$, at the point $(0, 1)$.
Fig. 1.9. Computing the slope of the tangent line $L$.
The quantity $a\frac{\partial f}{\partial x}(x_0, y_0) + b\frac{\partial f}{\partial y}(x_0, y_0)$ is defined to be the directional derivative of $f$, at the point $(x_0, y_0)$, in the direction $V$. We will adopt the notation $\nabla_V f(x_0, y_0)$ for this quantity.
Let $f(x, y) = xy^2$. Let's compute the directional derivative of $f$, at the point $(2, 3)$, in the direction $V = \langle 1, 5 \rangle$. We compute:
$$\nabla_V f(2, 3) = 1 \cdot \frac{\partial f}{\partial x}(2, 3) + 5 \cdot \frac{\partial f}{\partial y}(2, 3) = 1(3^2) + 5(2 \cdot 2 \cdot 3) = 9 + 60 = 69.$$
Is 69 the slope of the tangent line to some curve that we get when we intersect the graph of $xy^2$ with some plane? What this number represents is the rate of change of $f$, as we walk along the line $l$ of Figure 1.9, with speed $|V|$. To find the desired slope we would have to walk with speed one. Hence, the directional derivative only represents a slope when $|V| = 1$. Let's at least see if this agrees with what we previously found.
If we stand at the point $(x_0, y_0)$, walk in the direction $\langle 1, 0 \rangle$ and ask what the rate of change of $f$ is, we obtain the following answer:
$$\nabla_{\langle 1, 0 \rangle} f(x_0, y_0) = 1 \cdot \frac{\partial f}{\partial x} + 0 \cdot \frac{\partial f}{\partial y} = \frac{\partial f}{\partial x}.$$
This certainly agrees with our interpretation of $\frac{\partial f}{\partial x}$ as a slope. If we repeat this with the vector $\langle 2, 0 \rangle$, then we find out how fast $f$ changes when we walk twice as fast in the same direction:
$$\nabla_{\langle 2, 0 \rangle} f(x_0, y_0) = 2\frac{\partial f}{\partial x}.$$
The vector $\left\langle \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right\rangle$ is called the gradient of $f$ and is denoted $\nabla f$. Using this notation we obtain the following formula:
$$\nabla_V f(x_0, y_0) = \nabla f(x_0, y_0) \cdot V.$$
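As a numeric illustration of this formula (a sketch with our own helper names; the gradient is approximated by central differences rather than computed exactly), we can recheck the computation of $\nabla_V f$ for $f(x, y) = xy^2$ at $(2, 3)$ with $V = \langle 1, 5 \rangle$:

```python
def f(x, y):
    return x * y**2

def grad(g, x, y, h=1e-6):
    # Approximate <df/dx, df/dy> with central differences.
    fx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    fy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return (fx, fy)

def directional_derivative(g, x, y, v):
    gx, gy = grad(g, x, y)
    return gx * v[0] + gy * v[1]  # grad f . V

print(directional_derivative(f, 2.0, 3.0, (1.0, 5.0)))  # ≈ 69
```

Here $\nabla f(2, 3) = \langle 9, 12 \rangle$, and the dot product with $\langle 1, 5 \rangle$ reproduces the 69 from the text.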
Note that this dot product is greatest when $V$ points in the same direction as $\nabla f$. This fact leads us to the geometric significance of the gradient vector. Think of $f(x, y)$ as a function which represents the altitude in some mountain range, given a location in longitude $x$ and latitude $y$. Now, if all you know is $f$ and your location $x$ and $y$, and you want to figure out which way "uphill" is, all you have to do is point yourself in the direction of $\nabla f$.
What if you wanted to know what the slope was in the direction of steepest ascent? You would have to compute the directional derivative, using a vector of length one which points in the same direction as $\nabla f$. Such a vector is easy to find: $U = \frac{\nabla f}{|\nabla f|}$. Now we compute this slope:
$$\begin{aligned}
\nabla_U f &= \nabla f \cdot U \\
&= \nabla f \cdot \frac{\nabla f}{|\nabla f|} \\
&= \frac{1}{|\nabla f|}(\nabla f \cdot \nabla f) \\
&= \frac{1}{|\nabla f|}|\nabla f|^2 \\
&= |\nabla f|.
\end{aligned}$$
Hence, the magnitude of the gradient vector represents the largest slope of a tangent line through a particular point.
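A quick numeric check of this conclusion (again with our own helper names): for $f(x, y) = xy^2$ at the point $(2, 3)$ we have $\nabla f = \langle 9, 12 \rangle$, so the slope in the direction of steepest ascent should be $|\nabla f| = 15$:

```python
import math

def f(x, y):
    return x * y**2

def grad(g, x, y, h=1e-6):
    # Central-difference approximation of <df/dx, df/dy>.
    fx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    fy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return (fx, fy)

gx, gy = grad(f, 2.0, 3.0)
norm = math.hypot(gx, gy)       # |grad f| = 15 here
u = (gx / norm, gy / norm)      # unit vector of steepest ascent
slope = gx * u[0] + gy * u[1]   # directional derivative along u
print(norm, slope)              # both ≈ 15
```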
1.14. Let $f(x, y) = xy^2$.
Compute $\nabla f$.
Use your answer to the previous question to compute $\nabla_{\langle 1, 5 \rangle} f(2, 3)$.
Find a vector of length one that points in the direction of steepest ascent, at the point $(2, 3)$.
What is the largest slope of a tangent line to the graph of $f$ when $(x, y) = (2, 3)$?
1.15. Suppose $(x_0, y_0)$ is a point where $\nabla f$ is non-zero and let $n = f(x_0, y_0)$. Show that the vector $\nabla f(x_0, y_0)$ is perpendicular to the set of points $(x, y)$ such that $f(x, y) = n$ (i.e., a level curve).
1.16. For each of the following functions $f(x, y)$:
Compute $\nabla f(0, 0)$.
What does this answer tell you about the slope of the lines tangent to the graph of $f$ at $(0, 0)$?
Compute all second partials at $(0, 0)$.
At the point $(0, 0)$ compute
$$D(x, y) = \left|\begin{array}{cc}
\frac{\partial^{2} f}{\partial x^{2}} & \frac{\partial^{2} f}{\partial x \partial y} \\
\frac{\partial^{2} f}{\partial y \partial x} & \frac{\partial^{2} f}{\partial y^{2}}
\end{array}\right| = \frac{\partial^{2} f}{\partial x^{2}} \frac{\partial^{2} f}{\partial y^{2}} - \frac{\partial^{2} f}{\partial x \partial y} \frac{\partial^{2} f}{\partial y \partial x}.$$
Describe the shape of the graph of $f(x, y)$ near $(0, 0)$.
1.17. A function $f(x, y)$ is said to have a critical point at $(x_0, y_0)$ if $\nabla f(x_0, y_0) = \langle 0, 0 \rangle$. Based on the previous problem, hypothesize about whether the graph of $z = f(x, y)$ has a maximum, minimum, or saddle at $(x_0, y_0)$ if $f(x, y)$ has a critical point at $(x_0, y_0)$ and:
$D(x_0, y_0) > 0$ and $\frac{\partial^2 f}{\partial x^2} < 0$.
$D(x_0, y_0) > 0$ and $\frac{\partial^2 f}{\partial x^2} > 0$.
$D(x_0, y_0) < 0$.
1.18. Find functions $f(x, y)$ such that $D(0, 0) = 0$ and at $(0, 0)$ the graph of $z = f(x, y)$ has a
Minimum.
Maximum.
Saddle.
2
Parameterizations
2.1 Parameterized curves in $\mathbb{R}^2$
Given a curve $C$ in $\mathbb{R}^2$, a parameterization for $C$ is a (one-to-one, onto, differentiable) function of the form $\varphi: \mathbb{R}^1 \rightarrow C$.
Example 3. The function $\varphi(t) = (\cos t, \sin t)$, where $0 \leq t < 2\pi$, is a parameterization for the circle of radius 1. Another parameterization for the same circle is $\Psi(t) = (\cos 2t, \sin 2t)$, where $0 \leq t < \pi$. The difference between these two parameterizations is that as $t$ increases, the image of $\Psi(t)$ moves twice as fast around the circle as the image of $\varphi(t)$.
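The two parameterizations in Example 3 can be compared numerically. The sketch below (helper names are ours) checks that both land on the unit circle and that $\Psi$ moves twice as fast as $\varphi$:

```python
import math

phi = lambda t: (math.cos(t), math.sin(t))          # 0 <= t < 2*pi
psi = lambda t: (math.cos(2 * t), math.sin(2 * t))  # 0 <= t < pi

def speed(curve, t, h=1e-6):
    # |d(curve)/dt| approximated by a central difference.
    (x1, y1), (x2, y2) = curve(t - h), curve(t + h)
    return math.hypot(x2 - x1, y2 - y1) / (2 * h)

t = 0.7
x, y = phi(t)
print(x * x + y * y)                 # ≈ 1: the image lies on the unit circle
print(speed(phi, t), speed(psi, t))  # ≈ 1 and ≈ 2: psi moves twice as fast
```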
2.1. A function of the form $\varphi(t) = (at + c, bt + d)$ is a parameterization of a line.
What is the slope of the line parameterized by $\varphi$?
How does this line compare to the one parameterized by $\Psi(t) = (at, bt)$?
2.2. Draw the curves given by the following parameterizations:
$(t, t^2)$, where $0 \leq t \leq 1$.
$(t^2, t^3)$, where $0 \leq t \leq 1$.
$(2\cos t, 3\sin t)$, where $0 \leq t \leq 2\pi$.
$(\cos 2t, \sin 3t)$, where $0 \leq t \leq 2\pi$.
$(t\cos t, t\sin t)$, where $0 \leq t \leq 2\pi$.
Given a curve, it can be very difficult to find a parameterization. There are many ways of approaching the problem, but nothing which always works. Here are a few hints:
If $C$ is the graph of a function $y = f(x)$, then $\varphi(t) = (t, f(t))$ is a parameterization of $C$. Notice that the $y$-coordinate of every point in the image of this parameterization is obtained from the $x$-coordinate by applying the function $f$.
If one has a polar equation for a curve like $r = f(\theta)$, then, since $x = r\cos\theta$ and $y = r\sin\theta$, we get a parameterization of the form $\varphi(\theta) = (f(\theta)\cos\theta, f(\theta)\sin\theta)$.
Example 4. The top half of a circle of radius one is the graph of $y = \sqrt{1 - x^2}$. Hence, a parameterization for this is $\left(t, \sqrt{1 - t^2}\right)$, where $-1 \leq t \leq 1$. This figure is also the graph of the polar equation $r = 1$, $0 \leq \theta \leq \pi$, hence the parameterization $(\cos t, \sin t)$, where $0 \leq t \leq \pi$.
2.3. Sketch and find parameterizations for the curves described by:
The graph of the polar equation $r = \cos\theta$.
The graph of $y = \sin x$.
2.4. Find a parameterization for the line segment which connects the point $(1, 1)$ to the point $(2, 5)$.
Parameterized curves may be familiar from a second semester calculus class. Often in these classes one learns how to calculate the slope of a tangent line to the curve. But usually one does not discuss the derivative of the parameterization itself. One reason is that the derivative is actually a vector. If $\varphi(t) = (f(t), g(t))$, then
$$\frac{d\varphi}{dt}(t_0) = \left\langle \frac{df}{dt}(t_0), \frac{dg}{dt}(t_0) \right\rangle.$$
This vector has important geometric significance. The slope of a line containing this vector when $t = t_0$ is the same as the slope of the line tangent to the curve at the point $\varphi(t_0)$. The magnitude (length) of this vector gives one a concept of the speed of the point $\varphi(t)$ as $t$ increases through $t_0$. For convenience, one often draws the vector $\varphi'(t_0)$ based at the point $\varphi(t_0)$ (see Figure 2.1).
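This componentwise derivative can be approximated numerically. The sketch below (our own helper names) recovers the tangent vector $\langle 1, 2 \rangle$ of Figure 2.1 for $\varphi(t) = (t, t^2)$ at $t = 1$:

```python
phi = lambda t: (t, t**2)

def velocity(curve, t, h=1e-6):
    # Componentwise central difference: <f'(t), g'(t)>.
    (x1, y1), (x2, y2) = curve(t - h), curve(t + h)
    return ((x2 - x1) / (2 * h), (y2 - y1) / (2 * h))

print(velocity(phi, 1.0))  # ≈ (1, 2), based at the point phi(1) = (1, 1)
```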
2.5. Let $\varphi(t) = (\cos t, \sin t)$ (where $0 \leq t \leq \pi$) and $\Psi(t) = \left(t, \sqrt{1 - t^2}\right)$ (where $-1 \leq t \leq 1$) be parameterizations of the top half of the unit circle. Sketch the vectors $\frac{d\varphi}{dt}$ and $\frac{d\Psi}{dt}$ at the points $\left(\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}\right)$, $(0, 1)$ and $\left(-\frac{\sqrt{2}}{2}, \frac{\sqrt{2}}{2}\right)$.
2.6. Let $C$ be the set of points in $\mathbb{R}^2$ that satisfies the equation $x = y^2$.
Find a parameterization for $C$.
Find a tangent vector to $C$ at the point $(4, 2)$.
2.2 Cylindrical and spherical coordinates
There are several ways to specify the location of a point in $\mathbb{R}^3$. The most common is to give the lengths of the projections onto the $x$-, $y$- and $z$-axes. These are, of course, the $x$-, $y$- and $z$-coordinates. We often call the $(x, y, z)$ coordinate system
Fig. 2.1. The derivative of the parameterization $\varphi(t) = (t, t^2)$ is the vector $\langle 1, 2t \rangle$. When $t = 1$ this is the vector $\langle 1, 2 \rangle$, which we picture based at the point $\varphi(1) = (1, 1)$.
Cartesian coordinates (after the mathematician René Descartes), or rectangular coordinates.
A second method of describing the location of a point is to use polar coordinates $(r, \theta)$ to describe the projection onto the $xy$-plane, and the quantity $z$ to describe the height off of the $xy$-plane (see Figure 2.2). It follows that the relationships between $r$, $\theta$, $x$ and $y$ are the same as for polar coordinates:
$$\begin{array}{ll}
x = r\cos\theta & r = \sqrt{x^2 + y^2} \\
y = r\sin\theta & \theta = \tan^{-1}\left(\frac{y}{x}\right).
\end{array}$$
The $(r, \theta, z)$ coordinates are called cylindrical coordinates.
The third most common coordinate system is called spherical coordinates. In this system, one specifies the distance $\rho$ from the origin, the same angle $\theta$ from cylindrical coordinates and the angle $\varphi$ that a ray from the origin makes with the $z$-axis (see Figure 2.3). A little basic trigonometry yields the relationships:
$$x = \rho\sin\varphi\cos\theta, \quad y = \rho\sin\varphi\sin\theta, \quad z = \rho\cos\varphi.$$
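The standard conversions $x = \rho\sin\varphi\cos\theta$, $y = \rho\sin\varphi\sin\theta$, $z = \rho\cos\varphi$ can be sketched in code, together with a round-trip check (function names are ours; $\theta$ is the angle in the $xy$-plane and $\varphi$ is measured from the positive $z$-axis):

```python
import math

def spherical_to_cartesian(rho, theta, phi):
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

def cartesian_to_spherical(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)
    phi = math.acos(z / rho)
    return (rho, theta, phi)

p = spherical_to_cartesian(2.0, 0.5, 1.0)
print(cartesian_to_spherical(*p))  # recovers (2.0, 0.5, 1.0) up to rounding
```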
2.7. Find all of the relationships between the quantities $r$, $\theta$ and $z$ from cylindrical coordinates and the quantities $\rho$, $\theta$ and $\varphi$ from spherical coordinates.
Fig. 2.2. Cylindrical coordinates.
Fig. 2.3. Spherical coordinates.
Each coordinate system is useful for describing different graphs, as can be seen in the following examples.
Example 5. A cylinder of radius one, centered on the $z$-axis, can be described by equations in each coordinate system as follows:
Rectangular: $x^2 + y^2 = 1$.
Cylindrical: $r = 1$.
Spherical: $\rho\sin\varphi = 1$.
Example 6. A sphere of radius one is described by the equations:
Rectangular: $x^2 + y^2 + z^2 = 1$.
Cylindrical: $r^2 + z^2 = 1$.
Spherical: $\rho = 1$.
2.9. Find rectangular, cylindrical and spherical equations that describe the following shapes:
A right, circular cone centered on the $z$-axis, with vertex at the origin.
The $xz$-plane.
The $xy$-plane.
A plane that is at an angle of $\frac{\pi}{4}$ with both the $x$- and $y$-axes.
The surface found by revolving the graph of $z = x^3$ (where $x \geq 0$) around the $z$-axis.
2.3 Parameterized surfaces in $\mathbb{R}^3$
A parameterization for a surface $S$ in $\mathbb{R}^3$ is a (one-to-one, onto, differentiable) function from some subset of $\mathbb{R}^2$ into $\mathbb{R}^3$ whose image is $S$.
Example 7. The function $\varphi(u, v) = \left(u, v, \sqrt{1 - u^2 - v^2}\right)$, where $(u, v)$ lies inside a disk of radius one, is a parameterization for the top half of the unit sphere.
One of the best ways to parameterize a surface is to find an equation in some coordinate system which can be used to eliminate one unknown coordinate. Then translate back to rectangular coordinates.
Example 8. An equation for the top half of the sphere in cylindrical coordinates is $r^2 + z^2 = 1$. Solving for $z$ then gives us $z = \sqrt{1 - r^2}$. Translating to rectangular coordinates, we have:
$$\phi(r, \theta) = \left(r\cos\theta, r\sin\theta, \sqrt{1 - r^2}\right),$$
where $0 \leq r \leq 1$ and $0 \leq \theta \leq 2\pi$.
Example 9. The equation $\rho = \varphi$ describes some surface in spherical coordinates. Translating to rectangular coordinates then gives us:
$$\begin{gathered}
x = \rho\sin\rho\cos\theta \\
y = \rho\sin\rho\sin\theta \\
z = \rho\cos\rho.
\end{gathered}$$
Hence, a parameterization for this surface is given by
$$\phi(\rho, \theta) = (\rho\sin\rho\cos\theta, \rho\sin\rho\sin\theta, \rho\cos\rho).$$
2.10. Find parameterizations of the surfaces described by the equations in Problem 2.8.
2.11. Find a parameterization for the graph of an equation of the form $z = f(x, y)$.
2.12. Use the rectangular, cylindrical and spherical equations found in Problem 2.9 to parameterize the surfaces described there.
2.13. Use spherical coordinates to find a parameterization for the portion of the sphere of radius two, centered at the origin, which lies below the graph of $z = r$ and above the $xy$-plane.
2.14. Sketch the surfaces given by the following parameterizations:
$\psi(\theta, \phi) = (\phi\sin\phi\cos\theta, \phi\sin\phi\sin\theta, \phi\cos\phi)$, $0 \leq \phi \leq \frac{\pi}{2}$, $0 \leq \theta \leq 2\pi$.
$\phi(r, \theta) = (r\cos\theta, r\sin\theta, \cos r)$, $0 \leq r \leq 2\pi$, $0 \leq \theta \leq 2\pi$.
Just as we could differentiate parameterizations of curves in $\mathbb{R}^2$, we can also differentiate parameterizations of surfaces in $\mathbb{R}^3$. In general, such a parameterization for a surface $S$ can be written as
$$\phi(u, v) = (f_1(u, v), f_2(u, v), f_3(u, v)).$$
Thus, there are two variables we can differentiate with respect to: $u$ and $v$. Each of these gives a vector which is tangent to the parameterized surface:
$$\frac{\partial \phi}{\partial u} = \left\langle \frac{\partial f_1}{\partial u}, \frac{\partial f_2}{\partial u}, \frac{\partial f_3}{\partial u} \right\rangle \quad \text{and} \quad \frac{\partial \phi}{\partial v} = \left\langle \frac{\partial f_1}{\partial v}, \frac{\partial f_2}{\partial v}, \frac{\partial f_3}{\partial v} \right\rangle.$$
The vectors $\frac{\partial \phi}{\partial u}$ and $\frac{\partial \phi}{\partial v}$ determine a plane which is tangent to the surface $S$ at the point $\phi(u, v)$.
2.15. Suppose some surface is described by the parameterization
$$\phi(u, v) = \left(2u, 3v, u^2 + v^2\right).$$
Find two (non-parallel) vectors which are tangent to this surface at the point $(4, 3, 5)$.
2.4 Parameterized curves in $\mathbb{R}^3$
We begin with an example which demonstrates a parameterization of a curve in $\mathbb{R}^3$.
Example 10. The function $\varphi(t) = (\cos t, \sin t, t)$ parameterizes a curve that spirals upward around a cylinder of radius one.
2.16. Describe the difference between the curves with the following parameterizations:
$\left(\cos\frac{1}{t}, \sin\frac{1}{t}, t\right)$.
2.17. Describe the lines given by the following parameterizations:
$(t, 0, 0)$.
$(0, 0, t)$.
$(0, t, t)$.
$(t, t, t)$.
In the previous section, we found parameterizations of surfaces by finding an equation for the surface (in some coordinate system), solving for a variable and then translating to rectangular coordinates. To find a parameterization of a curve in $\mathbb{R}^3$, an effective strategy is to find some way to "eliminate" two coordinates (in some system), and then translate into rectangular coordinates. By "eliminating" a coordinate we mean either expressing it as some constant, or expressing it as a function of the third, unknown coordinate.
Example 11. We demonstrate two ways to parameterize one of the lines that is at the intersection of the cone $z^2 = x^2 + y^2$ and the plane $y = 2x$. The coordinate $y$ is already expressed as a function of $x$. To express $z$ as a function of $x$, we substitute $2x$ for $y$ in the first equation. This gives us $z^2 = x^2 + (2x)^2 = 5x^2$, or $z = \sqrt{5}x$ (the negative root would give us the other intersection line). Hence, we get the parameterization
$$\varphi(t) = \left(t, 2t, \sqrt{5}t\right).$$
Another way to describe this line is with spherical coordinates. Note that for every point on the line, $\varphi = \frac{\pi}{4}$ (from the first equation) and $\theta = \tan^{-1} 2$ (because $\tan\theta = y/x = 2$, from the second equation). Converting to rectangular coordinates then gives us
$$\psi(\rho) = \left(\frac{\rho}{\sqrt{10}}, \frac{2\rho}{\sqrt{10}}, \frac{\sqrt{5}\rho}{\sqrt{10}}\right).$$
Note that dividing the first parameterization by $\sqrt{10}$ and simplifying yields the second parameterization.
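Both descriptions of the line can be verified numerically. The sketch below uses our own reconstruction of the two parameterizations: $\varphi(t) = (t, 2t, \sqrt{5}t)$ from the equations $y = 2x$ and $z = \sqrt{5}x$, and $\psi(\rho) = (\rho/\sqrt{10}, 2\rho/\sqrt{10}, \sqrt{5}\rho/\sqrt{10})$ from the spherical description:

```python
import math

# Hypothetical reconstruction (ours) of the two parameterizations of the line:
phi = lambda t: (t, 2 * t, math.sqrt(5) * t)            # from y = 2x, z = sqrt(5) x
psi = lambda rho: (rho / math.sqrt(10),
                   2 * rho / math.sqrt(10),
                   math.sqrt(5) * rho / math.sqrt(10))  # from spherical coordinates

# Every point of phi lies on both the cone and the plane:
for t in (0.5, 1.0, 3.0):
    x, y, z = phi(t)
    print(abs(z * z - (x * x + y * y)) < 1e-9, abs(y - 2 * x) < 1e-9)  # True True

# Dividing phi by sqrt(10) recovers psi:
diffs = [abs(a / math.sqrt(10) - b) for a, b in zip(phi(2.0), psi(2.0))]
print(max(diffs) < 1e-12)  # True
```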
2.18. Find a parameterization for the curve that is at the intersection of the plane $x + y = 1$ and the cone $z^2 = x^2 + y^2$.
2.19. Find two parameterizations for the circle that is at the intersection of the cylinder $x^2 + y^2 = 4$ and the paraboloid $z = x^2 + y^2$.
2.5 Parameterized regions in $\mathbb{R}^2$ and $\mathbb{R}^3$
In Section 1.3, we learned how to integrate functions of multiple variables over rectangular regions. Eventually we will learn how to integrate such functions over regions of any shape. The trick will be to parameterize such regions by functions whose domain is a rectangle. Some cases of this are already familiar.
Example 12. A parameterization for the disk of radius one (that is, the set of points in $\mathbb{R}^2$ which are at a distance of at most one from the origin) is given using polar coordinates:
$$\phi(r, \theta) = (r\cos\theta, r\sin\theta), \quad 0 \leq r \leq 1, \quad 0 \leq \theta \leq 2\pi.$$
2.20. Let $B$ be the ball of radius one in $\mathbb{R}^3$ (i.e., the set of points satisfying $x^2 + y^2 + z^2 \leq 1$).
Use spherical coordinates to find a parameterization for $B$.
Find a parameterization for the intersection of $B$ with the first octant.
2.21. The "solid cylinder" of height one and radius $r$ in $\mathbb{R}^3$ is the set of points inside the cylinder $x^2 + y^2 = r^2$, and between the planes $z = 0$ and $z = 1$.
Use cylindrical coordinates to find a parameterization for the solid cylinder of height one and radius one.
Find a parameterization for the region that is inside the solid cylinder of height one and radius two and outside the cylinder of radius one.
Example 13. A common type of region to integrate over is one that is bounded by the graphs of two functions. Suppose $R$ is the region in $\mathbb{R}^2$ above the graph of $y = g_1(x)$, below the graph of $y = g_2(x)$ and between the lines $x = a$ and $x = b$. A parameterization for $R$ (check this!) is given by
$$\phi(x, t) = (x, t g_2(x) + (1 - t)g_1(x)), \quad a \leq x \leq b, \quad 0 \leq t \leq 1.$$
2.22. Let $R$ be the region between the (polar) graphs of $r = f_1(\theta)$ and $r = f_2(\theta)$, where $a \leq \theta \leq b$. Find a parameterization for $R$.
2.23. Find a parameterization for the region in $\mathbb{R}^2$ bounded by the ellipse whose $x$-intercepts are 3 and $-3$ and $y$-intercepts are 2 and $-2$. (Hint: Start with the parameterization given in Example 12.)
2.24. Sketch the region in $\mathbb{R}^2$ parameterized by the following:
$$\phi(r, \theta) = (2r\cos\theta, r\sin\theta),$$
where $1 \leq r \leq 2$ and $0 \leq \theta \leq \pi/2$.
3
Introduction to Forms
3.1 So what is a differential form?
A differential form is simply this: an integrand. In other words, it is a thing which can be integrated over some (often complicated) domain. For example, consider the following integral: $\int_0^1 x^2\,dx$. This notation indicates that we are integrating $x^2$ over the interval $[0, 1]$. In this case, $x^2\,dx$ is a differential form. If you have had no exposure to this subject, this may make you a little uncomfortable. After all, in calculus we are taught that $x^2$ is the integrand. The symbol "$dx$" is only there to delineate when the integrand has ended and what variable we are integrating with respect to. However, as an object in itself, we are not taught any meaning for "$dx$." Is it a function? Is it an operator on functions? Some professors call it an "infinitesimal" quantity. This is very tempting. After all, $\int_0^1 x^2\,dx$ is defined to be the limit, as $n \to \infty$, of $\sum_{i=1}^n x_i^2 \Delta x$, where $\{x_i\}$ are $n$ evenly spaced points in the interval $[0, 1]$, and $\Delta x = 1/n$. When we take the limit, the symbol "$\Sigma$" becomes "$\int$," and the symbol "$\Delta x$" becomes "$dx$." This would suggest that $dx = \lim_{\Delta x \to 0} \Delta x$, which is absurd, since $\lim_{\Delta x \to 0} \Delta x = 0$! We are not trying to make the argument that the symbol "$dx$" should be eliminated. It does have meaning. This is one of the many mysteries that this book will reveal.
One word of caution here: not all integrands are differential forms. In fact, in the appendix we will see how to calculate arc length and surface area. These calculations involve integrands which are not differential forms. Differential forms are simply natural objects to
integrate, and also the first that one should study. As we shall see, this is much like beginning the study of all functions by understanding linear functions. The naive student may at first object to this, since linear functions are a very restrictive class. On the other hand, eventually we learn that any differentiable function (a much more general class) can be locally approximated by a linear function. Hence, in some sense, the linear functions are the most important ones. In the same way, one can make the argument that differential forms are the most important integrands.
3.2 Generalizing the integral
Let's begin by studying a simple example, and trying to figure out how and what to integrate. The function $f(x, y) = y^2$ maps $\mathbb{R}^2$ to $\mathbb{R}$. Let $M$ denote the top half of the circle of radius one, centered at the origin. Let's restrict the function $f$ to the domain $M$ and try to integrate it. Here we encounter our first problem: the given description of $M$ is not particularly useful. If $M$ were something more complicated, it would have been much harder to describe it in words as we have just done. A parameterization is far easier to communicate, and far easier to use to determine which points of $\mathbb{R}^2$ are elements of $M$, and which are not. But there are lots of parameterizations of $M$. Here are two which we shall use:
$$\varphi_1(a) = \left(a, \sqrt{1 - a^2}\right), \text{ where } -1 \leq a \leq 1,$$
and
$$\varphi_2(t) = (\cos(t), \sin(t)), \text{ where } 0 \leq t \leq \pi.$$
Here is the trick: integrating $f$ over $M$ is difficult. It may not even be clear as to what this means. But perhaps we can use $\varphi_1$ to translate this problem into an integral over the interval $[-1, 1]$. After all, an integral is a big sum. If we add up all the numbers $f(x, y)$ for all the points, $(x, y)$, of $M$, shouldn't we get the same thing as if we added up all the numbers $f(\varphi_1(a))$, for all the points, $a$, of $[-1, 1]$ (see Fig. 3.1)?
Fig. 3.1. Shouldn't the integral of $f$ over $M$ be the same as the integral of $f \circ \varphi$ over $[-1, 1]$?
Let's try it. $\varphi_1(a) = \left(a, \sqrt{1 - a^2}\right)$, so $f(\varphi_1(a)) = 1 - a^2$. Hence, we are saying that the integral of $f$ over $M$ should be the same as $\int_{-1}^1 (1 - a^2)\,da$. Using a little calculus, we can determine that this evaluates to $4/3$.
Let's try this again, this time using $\varphi_2$. Using the same argument, the integral of $f$ over $M$ should be the same as $\int_0^\pi f(\varphi_2(t))\,dt = \int_0^\pi \sin^2(t)\,dt = \pi/2$.
But hold on! The problem was stated before any parameterizations were chosen. Shouldn't the answer be independent of which one was picked? It would not be a very meaningful problem if two people could get different correct answers, depending on how they went about solving it. Something strange is going on!
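The discrepancy is easy to reproduce numerically (a sketch; `riemann` is our own midpoint-sum helper, not anything from the text):

```python
import math

def f(x, y):
    return y**2

phi1 = lambda a: (a, math.sqrt(1 - a * a))     # -1 <= a <= 1
phi2 = lambda t: (math.cos(t), math.sin(t))    # 0 <= t <= pi

def riemann(g, lo, hi, n=100000):
    # Midpoint Riemann sum of g over [lo, hi].
    dx = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * dx) for i in range(n)) * dx

I1 = riemann(lambda a: f(*phi1(a)), -1.0, 1.0)      # naive pull-back via phi1
I2 = riemann(lambda t: f(*phi2(t)), 0.0, math.pi)   # naive pull-back via phi2
print(I1, I2)  # ≈ 4/3 and ≈ pi/2: two different "answers"!
```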
3.3 Interlude: a review of single variable integration
In order to understand what happened, we must first review the definition of the Riemann integral. In the usual definition of the integral, the first step is to divide the interval $[a, b]$ up into $n$ evenly spaced subintervals. Thus, $\int_a^b f(x)\,dx$ is defined to be the limit, as $n \to \infty$, of $\sum_{i=1}^n f(x_i)\Delta x$, where $\{x_i\}$ are $n$ evenly spaced points in the interval $[a, b]$, and $\Delta x = (b - a)/n$. But what if the points $\{x_i\}$ are not evenly spaced? We can still write down a reasonable sum: $\sum_{i=1}^n f(x_i)\Delta x_i$, where now $\Delta x_i = x_{i+1} - x_i$. In order to make the integral well-defined, we can no longer take the limit as $n \to \infty$. Instead, we must let $\max\{\Delta x_i\} \to 0$. It is a basic result of analysis that if this limit converges, then it does not matter how we picked the points $\{x_i\}$; the limit will converge to the same number. It is this number that we define to be the value of $\int_a^b f(x)\,dx$.
3.4 What went wrong?
We are now ready to figure out what happened in Section 3.2. Obviously, $\int_{-1}^1 f(\varphi_1(a))\,da$ was not what we wanted. But let's not give up on our general approach just yet; it would still be great if we could use $\varphi_1$ to find some function that we can integrate on $[-1, 1]$ that will give us the same answer as the integral of $f$ over $M$. For now, let's call this mystery function "$F(a)$."
Let's look at the Riemann sum that we get for $\int_{-1}^1 F(a)\,da$ when we divide the interval up into $n$ pieces, each of width $\Delta a$: $\sum_{i=1}^n F(a_i)\Delta a$. Examine Figure 3.2 to see what happens to the points, $a_i$, under the function $\varphi_1$. Notice that the points $\{\varphi_1(a_i)\}$ are not evenly spaced along $M$. To use these points to estimate the integral of $f$ over $M$, we would have to use the approach from the previous section. A Riemann sum for $f$ over $M$ would be $\sum_{i=1}^n f(\varphi_1(a_i))l_i$, where $l_i$ represents the arc length, along $M$, between $\varphi_1(a_i)$ and $\varphi_1(a_{i+1})$.
This is a bit problematic, however, since arc length is generally hard to calculate. Instead, we can approximate $l_i$ by substituting the length of the line segment which connects $\varphi_1(a_i)$ to $\varphi_1(a_{i+1})$, which we shall denote as $L_i$. Note that this approximation gets better and better as we let $n \to \infty$. Hence, when we take the limit, it does not matter if we use $l_i$ or $L_i$.
So our goal is to find a function, $F(a)$, on the interval $[-1, 1]$, so that
$$\sum_{i=1}^n F(a_i)\Delta a \approx \sum_{i=1}^n f(\varphi_1(a_i))L_i.$$
Of course, this equality will hold if $F(a_i)\Delta a = f(\varphi_1(a_i))L_i$. Solving, we get $F(a_i) = \frac{f(\varphi_1(a_i))L_i}{\Delta a}$. What happens to this function as $\Delta a \to 0$? First, note that $L_i = |\varphi_1(a_{i+1}) - \varphi_1(a_i)|$. Hence,
$$\begin{aligned}
\lim_{\Delta a \to 0} F(a_i) &= \lim_{\Delta a \to 0} \frac{f(\varphi_1(a_i))L_i}{\Delta a} \\
&= \lim_{\Delta a \to 0} \frac{f(\varphi_1(a_i))|\varphi_1(a_{i+1}) - \varphi_1(a_i)|}{\Delta a} \\
&= f(\varphi_1(a_i)) \lim_{\Delta a \to 0} \frac{|\varphi_1(a_{i+1}) - \varphi_1(a_i)|}{\Delta a} \\
&= f(\varphi_1(a_i)) \left| \lim_{\Delta a \to 0} \frac{\varphi_1(a_{i+1}) - \varphi_1(a_i)}{\Delta a} \right|.
\end{aligned}$$
But $\lim_{\Delta a \rightarrow 0} \frac{\varphi_1(a_{i+1}) - \varphi_1(a_i)}{\Delta a}$ is precisely the definition of the derivative of $\varphi_1$ at $a_i$, $\frac{d\varphi_1}{da}(a_i)$. Hence, we have $\lim_{\Delta a \rightarrow 0} F(a_i) = f(\varphi_1(a_i)) \left| \frac{d\varphi_1}{da}(a_i) \right|$. Finally, this means that the integral we want to compute is $\int_{-1}^{1} f(\varphi_1(a)) \left| \frac{d\varphi_1}{da} \right| da$.
$$\int_{-1}^{1} f(\varphi_1(a)) \left| \frac{d\varphi_1}{da} \right| da = \int_{0}^{\pi} f(\varphi_2(t)) \left| \frac{d\varphi_2}{dt} \right| dt,$$
using the function, $f$, defined in Section 3.2.
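To see this parameterization independence concretely, here is a small numerical sketch. The specific formulas below are assumptions for illustration: $\varphi_1(a) = (a, \sqrt{1-a^2})$ on $[-1,1]$ and $\varphi_2(t) = (\cos t, \sin t)$ on $[0,\pi]$ both trace the upper unit semicircle, and $f(x,y)=y$ stands in for the $f$ of Section 3.2, which is not reproduced in this excerpt. The code forms the text's Riemann sum $\sum_i f(\varphi(a_i))\,L_i$ for each parameterization:

```python
import numpy as np

# Hypothetical parameterizations of the upper unit semicircle M;
# the text's own phi_1 and phi_2 are not reproduced in this excerpt.
def phi1(a):                       # phi_1 : [-1, 1] -> M
    return np.stack([a, np.sqrt(1.0 - a**2)], axis=-1)

def phi2(t):                       # phi_2 : [0, pi] -> M
    return np.stack([np.cos(t), np.sin(t)], axis=-1)

def f(p):                          # stand-in integrand: f(x, y) = y
    return p[..., 1]

def riemann(phi, lo, hi, n=50_000):
    """The text's sum  sum_i f(phi(a_i)) * L_i,  where L_i is the length
    of the segment from phi(a_i) to phi(a_{i+1})."""
    a = np.linspace(lo, hi, n + 1)
    pts = phi(a)
    L = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths L_i
    return float(np.sum(f(pts[:-1]) * L))

I1 = riemann(phi1, -1.0, 1.0)
I2 = riemann(phi2, 0.0, np.pi)
# Both sums approximate the same parameterization-independent integral.
```

Even though the sample points $\varphi_1(a_i)$ are spaced very differently along $M$ than the points $\varphi_2(t_i)$, the two sums converge to the same number as $n \rightarrow \infty$.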
Recall that $\frac{d\varphi_1}{da}$ is a vector, based at the point $\varphi_1(a)$, tangent to $M$. If we think of $a$ as a time parameter, then the length of $\frac{d\varphi_1}{da}$ tells us how fast $\varphi_1(a)$ is moving along $M$. How can we generalize the integral $\int_{-1}^{1} f(\varphi_1(a)) \left| \frac{d\varphi_1}{da} \right| da$? Note that the bars $|\cdot|$ denote a function that "eats" vectors and "spits out" real numbers. So we can generalize the integral by looking at other such functions. In other words, a more general integral would be
$$\int_{-1}^{1} f(\varphi_1(a))\, \omega\!\left(\frac{d\varphi_1}{da}\right) da,$$
where $f$ is a function of points and $\omega$ is a function of vectors.
It is not the purpose of the present work to undertake a study of integrating with respect to all possible functions, omega\omega. However, as with the study of functions of real variables, a natural place to start is with linear functions. This is the study of differential forms. A differential form is precisely a linear function which eats vectors, spits out numbers and is used in integration. The strength of differential forms lies in the fact that their integrals do not depend on a choice of parameterization.
3.5 What about surfaces?
Let's repeat the previous discussion (faster this time), bumping everything up a dimension. Let $f: \mathbb{R}^3 \rightarrow \mathbb{R}$ be given by $f(x, y, z) = z^2$. Let $M$ be the top half of the sphere of radius one, centered at the origin. We can parameterize $M$ by the function $\varphi(r, \theta) = (r\cos\theta,\, r\sin\theta,\, \sqrt{1-r^2})$, where $0 \leq r \leq 1$ and $0 \leq \theta \leq 2\pi$. Again, our goal is not to figure out how to actually integrate $f$ over $M$, but to use $\varphi$ to set up an equivalent integral over the rectangle $R = [0,1] \times [0, 2\pi]$.
Let $\{x_{i,j}\}$ be a lattice of evenly spaced points in $R$. Let $\Delta r = x_{i+1,j} - x_{i,j}$ and $\Delta\theta = x_{i,j+1} - x_{i,j}$. By definition, the integral over $R$ of a function, $F(x)$, is equal to $\lim_{\Delta r, \Delta\theta \rightarrow 0} \sum F(x_{i,j})\,\Delta r\,\Delta\theta$.
To use the mesh of points, $\varphi(x_{i,j})$, in $M$ to set up a Riemann sum, we write down the following sum: $\sum f(\varphi(x_{i,j}))\,\mathrm{Area}(L_{i,j})$, where $L_{i,j}$ is the parallelogram spanned by the vectors $\varphi(x_{i+1,j}) - \varphi(x_{i,j})$ and $\varphi(x_{i,j+1}) - \varphi(x_{i,j})$. If we want our Riemann sum over $R$ to equal this sum, then we end up with $F(x_{i,j}) = \frac{f(\varphi(x_{i,j}))\,\mathrm{Area}(L_{i,j})}{\Delta r\,\Delta\theta}$.
Fig. 3.3. Setting up the Riemann sum for the integral of $z^2$ over the top half of the sphere of radius one.
We now leave it as an exercise to show that as $\Delta r$ and $\Delta\theta$ get small, $\frac{\mathrm{Area}(L_{i,j})}{\Delta r\,\Delta\theta}$ converges to the area of the parallelogram spanned by the vectors $\frac{\partial\varphi}{\partial r}(x_{i,j})$ and $\frac{\partial\varphi}{\partial\theta}(x_{i,j})$. The upshot of all this is that the integral we want to evaluate is the following:
$$\int_R f(\varphi(r,\theta))\,\mathrm{Area}\!\left(\frac{\partial\varphi}{\partial r}, \frac{\partial\varphi}{\partial\theta}\right) dr\,d\theta$$
3.2. Compute the value of this integral for the function $f(x, y, z) = z^2$.
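As a numerical cross-check (not a substitute for working the exercise symbolically), a midpoint Riemann sum over $R$ can approximate this integral, with $\mathrm{Area}\left(\frac{\partial\varphi}{\partial r}, \frac{\partial\varphi}{\partial\theta}\right)$ computed as the magnitude of the cross product of the two partials, a standard fact about parallelograms in $\mathbb{R}^3$. The partial derivatives below are differentiated from the $\varphi$ given in the text; the grid size is an arbitrary choice.

```python
import numpy as np

# Partial derivatives of phi(r, theta) = (r cos(theta), r sin(theta), sqrt(1 - r^2)),
# the hemisphere parameterization from the text.
def phi_r(r, th):
    return np.stack([np.cos(th), np.sin(th), -r / np.sqrt(1.0 - r**2)], axis=-1)

def phi_theta(r, th):
    return np.stack([-r * np.sin(th), r * np.cos(th), np.zeros_like(th)], axis=-1)

def hemisphere_integral(n=400):
    """Midpoint sum of f(phi) * Area(phi_r, phi_theta) over R = [0,1] x [0,2*pi],
    with f(x, y, z) = z**2 and Area(v, w) = |v x w|."""
    r = (np.arange(n) + 0.5) / n                  # midpoints in [0, 1]
    th = (np.arange(n) + 0.5) * 2 * np.pi / n     # midpoints in [0, 2*pi]
    R, TH = np.meshgrid(r, th, indexing="ij")
    area = np.linalg.norm(np.cross(phi_r(R, TH), phi_theta(R, TH)), axis=-1)
    f_vals = 1.0 - R**2                           # z^2 on the hemisphere
    return float(np.sum(f_vals * area) * (1.0 / n) * (2 * np.pi / n))

val = hemisphere_integral()
```

Note that sampling at cell midpoints sidesteps the $r = 1$ edge, where $\partial\varphi/\partial r$ blows up.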
The point of all this is not the specific integral that we have arrived at, but the form of the integral. We integrate $f \circ \varphi$ (as in the previous section), times a function which takes two vectors and returns a real number. Once again, we can generalize this by using other such functions:
$$\int_R f(\varphi(r,\theta))\,\omega\!\left(\frac{\partial\varphi}{\partial r}, \frac{\partial\varphi}{\partial\theta}\right) dr\,d\theta.$$
In particular, if we examine linear functions for $\omega$, we arrive at a differential form. The moral is that if we want to perform an integral over a region parameterized by $\mathbb{R}^1$, as in the previous section, then we need to multiply by a function which takes a vector and returns a number. If we want to integrate over something parameterized by $\mathbb{R}^2$, then we need to multiply by a function which takes two vectors and returns a number. In general, an $n$-form is a linear function which takes $n$ vectors and returns a real number. One integrates $n$-forms over regions that can be parameterized by $\mathbb{R}^n$. Their strength is that the value of such an integral does not depend on the choice of parameterization.
4
Forms
4.1 Coordinates for vectors
Before we begin to discuss functions of vectors, we first need to learn how to specify a vector. And before we can answer that, we must first learn where vectors live. In Figure 4.1 we see a curve, $C$, and a tangent line to that curve. The line can be thought of as the set of all tangent vectors at the point, $p$. We denote that line as $T_pC$, the tangent space to $C$ at the point $p$.
Fig. 4.1. $T_pC$ is the set of all vectors tangent to $C$ at $p$.
What if $C$ is actually a straight line? Will $T_pC$ be the same line? To answer this, let's instead think about the real number line, $L = \mathbb{R}^1$. Suppose $p$ is the point corresponding to the number 2 on $L$. We would like to understand $T_pL$, the set of all vectors tangent to $L$ at the point $p$. For example, where would you draw a vector of length three? Would you put its base at the origin on $L$? Of course not. You would put its base at the point $p$. This is really because the origin for $T_pL$ is different from the origin for $L$. We are thus thinking about $L$ and $T_pL$ as two different lines, placed right on top of each other.
The key to understanding the difference between $L$ and $T_pL$ is their coordinate systems. Let's pause here for a moment to look a little more closely. What are "coordinates" anyway? They are a way of assigning a number (or, more generally, a set of numbers) to a point in space. In other words, coordinates are functions which take points of a space and return (sets of) numbers. When we say that the $x$-coordinate of $p$ in $\mathbb{R}^2$ is 5, we really mean that we have a function, $x: \mathbb{R}^2 \rightarrow \mathbb{R}$, such that $x(p) = 5$.
Of course we need two numbers to specify a point in a plane, which means that we have two coordinate functions. Suppose we denote the plane by $P$ and let $x: P \rightarrow \mathbb{R}$ and $y: P \rightarrow \mathbb{R}$ be its coordinate functions. Then saying that the coordinates of a point, $p$, are $(2,3)$ is the same thing as saying that $x(p) = 2$ and $y(p) = 3$. In other words, the coordinates of $p$ are $(x(p), y(p))$.
So what do we use for coordinates in the tangent space? Well, first we need a basis for the tangent space of $P$ at $p$. In other words, we need to pick two vectors which we can use to give the relative positions of all other points. Note that if the coordinates of $p$ are $(x, y)$ then $\frac{d(x+t,\,y)}{dt} = \langle 1, 0 \rangle$ and $\frac{d(x,\,y+t)}{dt} = \langle 0, 1 \rangle$. We have switched to the notation "$\langle \cdot, \cdot \rangle$" to indicate that we are not talking about points of $P$ anymore, but rather vectors in $T_pP$. We take these two vectors to be a basis for $T_pP$. In other words, any point of $T_pP$ can be written as $dx\,\langle 1, 0 \rangle + dy\,\langle 0, 1 \rangle$, where $dx, dy \in \mathbb{R}$. Hence, "$dx$" and "$dy$" are coordinate functions for $T_pP$. Saying that the coordinates of a vector $V$ in $T_pP$ are $\langle 2, 3 \rangle$, for example, is the same thing as saying that $dx(V) = 2$ and $dy(V) = 3$. In general, we may refer to the coordinates of an arbitrary vector in $T_pP$ as $\langle dx, dy \rangle$, just as we may refer to the coordinates of an arbitrary point in $P$ as $(x, y)$.
It will be helpful in the future to be able to distinguish between the vector $\langle 2,3 \rangle$ in $T_pP$ and the vector $\langle 2,3 \rangle$ in $T_qP$, where $p \neq q$. We will do this by writing $\langle 2,3 \rangle_p$ for the former and $\langle 2,3 \rangle_q$ for the latter.
Let's pause for a moment to address something that may have been bothering you since your first term of calculus. Let's look at the tangent line to the graph of $y = x^2$ at the point $(1,1)$. We are no longer thinking of this tangent line as lying in the same plane that the graph does. Rather, it lies in $T_{(1,1)}\mathbb{R}^2$. The horizontal axis for $T_{(1,1)}\mathbb{R}^2$ is the "$dx$" axis and the vertical axis is the "$dy$" axis (see Fig. 4.2). Hence, we can write the equation of the tangent line as $dy = 2\,dx$. We can rewrite this as $\frac{dy}{dx} = 2$. Look familiar? This is one explanation of why we use the notation $\frac{dy}{dx}$ in calculus to denote the derivative.
4.1.
Draw a vector with $dx = 1$, $dy = 2$ in the tangent space $T_{(1,-1)}\mathbb{R}^2$.
Draw $\langle -3, 1 \rangle_{(0,1)}$.
Fig. 4.2. The line, $l$, lies in $T_{(1,1)}\mathbb{R}^2$. Its equation is $dy = 2\,dx$.
4.2 1-forms
Recall from the previous chapter that a 1-form is a linear function which acts on vectors and returns numbers. For the moment let's just look at 1-forms on $T_p\mathbb{R}^2$ for some fixed point, $p$. Recall that a linear function, $\omega$, is just one whose graph is a plane through the origin. Hence, we want to write down an equation of a plane through the origin in $T_p\mathbb{R}^2 \times \mathbb{R}$, where one axis is labelled $dx$, another $dy$, and the third $\omega$ (see Fig. 4.3). This is easy: $\omega = a\,dx + b\,dy$. Hence, to specify a 1-form on $T_p\mathbb{R}^2$ we only need to know two numbers: $a$ and $b$.
Here is a quick example. Suppose $\omega(\langle dx, dy \rangle) = 2\,dx + 3\,dy$. Then
$$\omega(\langle -1, 2 \rangle) = 2 \cdot (-1) + 3 \cdot 2 = 4.$$
The alert reader may see something familiar here: the dot product. That is, $\omega(\langle -1,2 \rangle) = \langle 2,3 \rangle \cdot \langle -1,2 \rangle$. Recall the geometric interpretation of the dot product: you project $\langle -1,2 \rangle$ onto $\langle 2,3 \rangle$ and then multiply by $|\langle 2,3 \rangle| = \sqrt{13}$. In other words:
Evaluating a 1-form on a vector is the same as projecting onto some line and then multiplying by some constant.
Fig. 4.3. The graph of $\omega$ is a plane through the origin.
In fact, we can even interpret the act of multiplying by a constant geometrically. Suppose $\omega$ is given by $a\,dx + b\,dy$. Then the value of $\omega(V_1)$ is the length of the projection of $V_1$ onto the line, $l$, where $\frac{\langle a, b \rangle}{|\langle a, b \rangle|^2}$ is a basis vector for $l$.
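Here is a quick numerical sketch of this interpretation; the function names are illustrative, not from the text. Evaluating $\omega = a\,dx + b\,dy$ on a vector agrees with the signed length of its projection onto $l$, measured in units of the basis vector $\langle a,b\rangle / |\langle a,b\rangle|^2$.

```python
import numpy as np

def one_form(a, b):
    """The 1-form w = a dx + b dy, acting on a tangent vector <dx, dy>."""
    return lambda v: a * v[0] + b * v[1]

def length_in_basis_units(v, basis):
    """Signed length of the projection of v onto the line through `basis`,
    measured in units of the basis vector."""
    return float(np.dot(v, basis) / np.dot(basis, basis))

a, b = 2.0, 3.0
w = one_form(a, b)
basis = np.array([a, b]) / (a**2 + b**2)   # <a, b> / |<a, b>|^2, as in the text

v = np.array([-1.0, 2.0])
# w(v) agrees with the projection of v measured against that basis vector.
same = np.isclose(w(v), length_in_basis_units(v, basis))
```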
This interpretation has a huge advantage: it is coordinate free. Recall from the previous section that we can think of the plane, $P$, as existing independently of our choice of coordinates. We only pick coordinates so that we can communicate to someone else the location of a point. Forms are similar. They are objects that exist independently of our choice of coordinates. This is one key as to why they are so useful outside of mathematics.
There is still another geometric interpretation of 1-forms. Let's first look at the simple example $\omega(\langle dx, dy \rangle) = dx$. This 1-form simply returns the first coordinate of whatever vector you feed into it. This is also a projection: it's the projection of the input vector onto the $dx$-axis. This immediately gives us a new interpretation of the action of a general 1-form, $\omega = a\,dx + b\,dy$.
Evaluating a 1-form on a vector is the same as projecting onto each coordinate axis, scaling each by some constant and adding the results.
Although this interpretation is more cumbersome, it is the one that will generalize better when we get to $n$-forms.
Let's move on now to 1-forms in $n$ dimensions. If $p \in \mathbb{R}^n$, then we can write $p$ in coordinates as $(x_1, x_2, \ldots, x_n)$. The coordinates for a vector in $T_p\mathbb{R}^n$ are $\langle dx_1, dx_2, \ldots, dx_n \rangle$. A 1-form is a linear function, $\omega$, whose graph (in $T_p\mathbb{R}^n \times \mathbb{R}$) is a plane through the origin. Hence, we can write it as $\omega = a_1\,dx_1 + a_2\,dx_2 + \cdots + a_n\,dx_n$. Again, this can be thought of as either projecting onto the vector $\langle a_1, a_2, \ldots, a_n \rangle$ and then multiplying by $|\langle a_1, a_2, \ldots, a_n \rangle|$, or as projecting onto each coordinate axis, multiplying by $a_i$, and then adding.
4.2. Let $\omega(\langle dx, dy \rangle) = -dx + 4\,dy$.
Compute $\omega(\langle 1,0 \rangle)$, $\omega(\langle 0,1 \rangle)$ and $\omega(\langle 2,3 \rangle)$.
What line does $\omega$ project vectors onto?
4.3. Find a 1-form which computes the length of the projection of a vector onto the indicated line, multiplied by the indicated constant $c$.
The $dx$-axis, $c = 3$.
The $dy$-axis, $c = 1/2$.
Find a 1-form that does both of the two preceding operations and adds the result.
The line $dy = \frac{3}{4}dx$, $c = 10$.
4.4. If $\omega$ is a 1-form, show:
$\omega(V_1 + V_2) = \omega(V_1) + \omega(V_2)$, for any vectors $V_1$ and $V_2$.
$\omega(cV) = c\,\omega(V)$, for any vector $V$ and constant $c$.
4.3 Multiplying 1-forms
In this section we would like to explore a method of multiplying 1-forms. You may think, "What is the big deal? If $\omega$ and $\nu$ are 1-forms, can't we just define $\omega \cdot \nu(V) = \omega(V) \cdot \nu(V)$?" Well, of course we can, but then $\omega \cdot \nu$ is not a linear function, so we have left the world of forms.
The trick is to define the product of $\omega$ and $\nu$ to be a 2-form. So as not to confuse this with the product just mentioned, we will use the symbol "$\wedge$" (pronounced "wedge") to denote multiplication. So how can we possibly define $\omega \wedge \nu$ to be a 2-form? We must define how it acts on a pair of vectors, $(V_1, V_2)$.
Note first that there are four ways to combine all the ingredients:
$$\omega(V_1), \quad \nu(V_1), \quad \omega(V_2), \quad \nu(V_2).$$
The first two of these are associated with $V_1$ and the second two with $V_2$. In other words, $\omega$ and $\nu$ together give a way of taking each vector and returning a pair of numbers. And how do we visualize pairs of numbers? In the plane, of course! Let's define a new plane with one axis as the $\omega$-axis and the other as the $\nu$-axis. So, the coordinates of $V_1$ in this plane are $[\omega(V_1), \nu(V_1)]$ and the coordinates of $V_2$ are $[\omega(V_2), \nu(V_2)]$. Note that we have switched to the notation "$[\cdot, \cdot]$" to indicate that we are describing points in a new plane. This may seem a little confusing at first. Just keep in mind that when we write something like $(1,2)$ we are describing the location of a point in the $xy$-plane, whereas $\langle 1,2 \rangle$ describes a vector in the $dx\,dy$-plane and $[1,2]$ is a vector in the $\omega\nu$-plane.
Let's not forget our goal now. We wanted to use $\omega$ and $\nu$ to take the pair of vectors, $(V_1, V_2)$, and return a number. So far all we have done is to take this pair of vectors and return another pair of vectors. But do we know of a way to take these vectors and get a number? Actually, we know several, but the most useful one turns out to be the area of the parallelogram that the vectors span. This is precisely what we define to be the value of $\omega \wedge \nu(V_1, V_2)$ (see Fig. 4.4).
Fig. 4.4. The product of $\omega$ and $\nu$.
Example 14. Let $\omega = 2\,dx - 3\,dy + dz$ and $\nu = dx + 2\,dy - dz$ be two 1-forms on $T_p\mathbb{R}^3$ for some fixed $p \in \mathbb{R}^3$. Let's evaluate $\omega \wedge \nu$ on the pair of vectors $(\langle 1,3,1 \rangle, \langle 2,-1,3 \rangle)$. First we compute the $[\omega, \nu]$ coordinates of the vector $\langle 1,3,1 \rangle$:
$$[\omega(\langle 1,3,1 \rangle),\, \nu(\langle 1,3,1 \rangle)] = [2 - 9 + 1,\; 1 + 6 - 1] = [-6, 6].$$
Similarly, we compute $[\omega(\langle 2,-1,3 \rangle), \nu(\langle 2,-1,3 \rangle)] = [10, -3]$. Finally, the signed area of the parallelogram spanned by $[-6,6]$ and $[10,-3]$ is
$$\begin{vmatrix} -6 & 10 \\ 6 & -3 \end{vmatrix} = (-6)(-3) - (10)(6) = -42.$$
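The computation in Example 14 can be sketched in code. Representing each 1-form by its coefficient vector and taking the determinant of the $[\omega, \nu]$-coordinates is an illustrative encoding, not notation from the text:

```python
import numpy as np

def wedge(w, v):
    """The 2-form w ^ v: sends (V1, V2) to the signed area of the
    parallelogram spanned by [w(Vi), v(Vi)] in the w,v-plane."""
    def two_form(V1, V2):
        M = np.array([[np.dot(w, V1), np.dot(w, V2)],
                      [np.dot(v, V1), np.dot(v, V2)]])
        return float(np.linalg.det(M))
    return two_form

# Example 14: w = 2dx - 3dy + dz and v = dx + 2dy - dz, as coefficient vectors.
wv = wedge(np.array([2.0, -3.0, 1.0]), np.array([1.0, 2.0, -1.0]))

V1 = np.array([1.0, 3.0, 1.0])
V2 = np.array([2.0, -1.0, 3.0])
result = wv(V1, V2)   # signed area of the parallelogram spanned by [-6, 6] and [10, -3]
```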
Should we have taken the absolute value? Not if we want to define a linear operator. The result of $\omega \wedge \nu$ is not just an area, it is a signed area; it can either be positive or negative. We will see a geometric interpretation of this soon. For now, we define:
$$\omega \wedge \nu(V_1, V_2) = \begin{vmatrix} \omega(V_1) & \omega(V_2) \\ \nu(V_1) & \nu(V_2) \end{vmatrix}.$$
4.5. Let $\omega$ and $\nu$ be the following 1-forms:
$$\begin{gathered}
\omega(\langle dx, dy \rangle) = 2\,dx - 3\,dy \\
\nu(\langle dx, dy \rangle) = dx + dy
\end{gathered}$$
Let $V_1 = \langle -1, 2 \rangle$ and $V_2 = \langle 1, 1 \rangle$. Compute $\omega(V_1)$, $\nu(V_1)$, $\omega(V_2)$ and $\nu(V_2)$.
Use your answers to the previous question to compute $\omega \wedge \nu(V_1, V_2)$.
Find a constant $c$ such that $\omega \wedge \nu = c\,dx \wedge dy$.
4.6. $\omega \wedge \nu(V_1, V_2) = -\omega \wedge \nu(V_2, V_1)$ ($\omega \wedge \nu$ is skew-symmetric).
4.7. $\omega \wedge \nu(V, V) = 0$. (This follows immediately from the previous exercise. It should also be clear from the geometric interpretation.)
4.8. $\omega \wedge \nu(V_1 + V_2, V_3) = \omega \wedge \nu(V_1, V_3) + \omega \wedge \nu(V_2, V_3)$ and $\omega \wedge \nu(cV_1, V_2) = \omega \wedge \nu(V_1, cV_2) = c\,\omega \wedge \nu(V_1, V_2)$, where $c$ is any real number ($\omega \wedge \nu$ is bilinear).
4.9. $\omega \wedge \nu(V_1, V_2) = -\nu \wedge \omega(V_1, V_2)$.
It is interesting to compare Problems 4.6 and 4.9. Problem 4.6 says that the 2-form, $\omega \wedge \nu$, is a skew-symmetric operator on pairs of vectors. Problem 4.9 says that $\wedge$ can be thought of as a skew-symmetric operator on 1-forms.
4.10. $\omega \wedge \omega(V_1, V_2) = 0$.
4.11. $(\omega + \nu) \wedge \psi = \omega \wedge \psi + \nu \wedge \psi$ ($\wedge$ is distributive).
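A numerical spot-check of Problems 4.6 through 4.11, using the signed-area description of $\omega \wedge \nu$ on coefficient vectors and randomly chosen forms and vectors. Random checks only illustrate the identities; the exercises ask for proofs.

```python
import numpy as np

rng = np.random.default_rng(0)

def wedge(w, v):
    """w ^ v as a 2-form: signed area of the parallelogram spanned by the
    [w, v]-coordinates of the two input vectors."""
    return lambda V1, V2: (np.dot(w, V1) * np.dot(v, V2)
                           - np.dot(w, V2) * np.dot(v, V1))

w, v, psi = rng.normal(size=(3, 3))    # three random 1-forms on T_p R^3
V1, V2, V3 = rng.normal(size=(3, 3))   # three random vectors
c = 2.5

checks = [
    np.isclose(wedge(w, v)(V1, V2), -wedge(w, v)(V2, V1)),         # 4.6
    np.isclose(wedge(w, v)(V1, V1), 0.0),                          # 4.7
    np.isclose(wedge(w, v)(V1 + V2, V3),                           # 4.8
               wedge(w, v)(V1, V3) + wedge(w, v)(V2, V3)),
    np.isclose(wedge(w, v)(c * V1, V2), c * wedge(w, v)(V1, V2)),  # 4.8
    np.isclose(wedge(w, v)(V1, V2), -wedge(v, w)(V1, V2)),         # 4.9
    np.isclose(wedge(w, w)(V1, V2), 0.0),                          # 4.10
    np.isclose(wedge(w + v, psi)(V1, V2),                          # 4.11
               wedge(w, psi)(V1, V2) + wedge(v, psi)(V1, V2)),
]
all_ok = all(checks)
```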
There is another way to interpret the action of $\omega \wedge \nu$ which is much more geometric. First let $\omega = a\,dx + b\,dy$ be a 1-form on $T_p\mathbb{R}^2$. Then we let $\langle \omega \rangle$ be the vector $\langle a, b \rangle$.
4.12. Let $\omega$ and $\nu$ be 1-forms on $T_p\mathbb{R}^2$. Show that $\omega \wedge \nu(V_1, V_2)$ is the area of the parallelogram spanned by $V_1$ and $V_2$, times the area of the parallelogram spanned by $\langle \omega \rangle$ and $\langle \nu \rangle$.
4.13. Use the previous problem to show that if $\omega$ and $\nu$ are 1-forms on $T_p\mathbb{R}^2$ such that $\omega \wedge \nu = 0$, then there is a constant $c$ such that $\omega = c\,\nu$.
There is also a more geometric way to think about $\omega \wedge \nu$ if $\omega$ and $\nu$ are 1-forms on $T_p\mathbb{R}^3$, although it will take us some time to develop the idea. Suppose $\omega = a\,dx + b\,dy + c\,dz$. Then we will denote the vector $\langle a, b, c \rangle$ as $\langle \omega \rangle$. From the previous section, we know that if $V$ is any vector, then $\omega(V) = \langle \omega \rangle \cdot V$, and that this is just the projection of $V$ onto the line containing $\langle \omega \rangle$, times $|\langle \omega \rangle|$.
Now suppose $\nu$ is some other 1-form. Choose a scalar $x$ so that $\langle \nu - x\omega \rangle$ is perpendicular to $\langle \omega \rangle$. Let $\nu_\omega = \nu - x\omega$. Note that $\omega \wedge \nu_\omega = \omega \wedge (\nu - x\omega) = \omega \wedge \nu - x\,\omega \wedge \omega = \omega \wedge \nu$. Hence, any geometric interpretation we find for the action of $\omega \wedge \nu_\omega$ is also a geometric interpretation of the action of $\omega \wedge \nu$.
Finally, we let $\bar{\omega} = \frac{\omega}{|\langle \omega \rangle|}$ and $\bar{\nu}_\omega = \frac{\nu_\omega}{|\langle \nu_\omega \rangle|}$. Note that these are 1-forms such that $\langle \bar{\omega} \rangle$ and $\langle \bar{\nu}_\omega \rangle$ are perpendicular unit vectors. We will now present a geometric interpretation of the action of $\bar{\omega} \wedge \bar{\nu}_\omega$ on a pair of vectors, $(V_1, V_2)$.
First, note that since $\langle \bar{\omega} \rangle$ is a unit vector, $\bar{\omega}(V_1)$ is just the projection of $V_1$ onto the line containing $\langle \bar{\omega} \rangle$. Similarly, $\bar{\nu}_\omega(V_1)$ is given by projecting $V_1$ onto the line containing $\langle \bar{\nu}_\omega \rangle$. As $\langle \bar{\omega} \rangle$ and $\langle \bar{\nu}_\omega \rangle$ are perpendicular, we can think of the quantity $\bar{\omega} \wedge \bar{\nu}_\omega(V_1, V_2)$ as the area of the parallelogram spanned by $V_1$ and $V_2$, projected onto the plane containing the vectors $\langle \bar{\omega} \rangle$ and $\langle \bar{\nu}_\omega \rangle$. This is the same plane as the one which contains the vectors $\langle \omega \rangle$ and $\langle \nu \rangle$.
Finally, note that since $\langle \omega \rangle$ and $\langle \nu_\omega \rangle$ are perpendicular, the quantity $|\langle \omega \rangle|\,|\langle \nu_\omega \rangle|$ is just the area of the rectangle spanned by these two vectors. Furthermore, the parallelogram spanned by the vectors $\langle \omega \rangle$ and $\langle \nu \rangle$ is obtained from this rectangle by skewing. Hence, they have the same area. We conclude:
Evaluating $\omega \wedge \nu$ on the pair of vectors $(V_1, V_2)$ gives the area of the parallelogram spanned by $V_1$ and $V_2$ projected onto the plane containing the vectors $\langle \omega \rangle$ and $\langle \nu \rangle$, multiplied by the area of the parallelogram spanned by $\langle \omega \rangle$ and $\langle \nu \rangle$.
CAUTION: While every 1-form can be thought of as a projected length, not every 2-form can be thought of as a projected area. The only 2-forms for which this interpretation is valid are those that are the product of 1-forms. See Problem 4.18.
Let's pause for a moment to look at a particularly simple 2-form on $T_p\mathbb{R}^3$: $dx \wedge dy$. Suppose $V_1 = \langle a_1, a_2, a_3 \rangle$ and $V_2 = \langle b_1, b_2, b_3 \rangle$. Then
$$dx \wedge dy(V_1, V_2) = \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}.$$
This is precisely the (signed) area of the parallelogram spanned by $V_1$ and $V_2$ projected onto the $dx\,dy$-plane.
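In code, $dx \wedge dy$ on $T_p\mathbb{R}^3$ is just a $2 \times 2$ determinant of the first two coordinates. For comparison, the $z$-component of the cross product computes the same projected signed area; that is a standard fact used here only as a cross-check, not something from the text.

```python
import numpy as np

def dx_wedge_dy(V1, V2):
    """dx ^ dy on T_p R^3: the signed area of the projection of the
    parallelogram spanned by V1 and V2 onto the dx dy-plane."""
    return V1[0] * V2[1] - V1[1] * V2[0]   # 2x2 determinant of first coordinates

V1 = np.array([1.0, 3.0, 1.0])
V2 = np.array([2.0, -1.0, 3.0])

area = dx_wedge_dy(V1, V2)
cross_z = np.cross(V1, V2)[2]   # same projected signed area, via the cross product
```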
4.14. Show that $\omega \wedge \nu(\langle a_1, a_2, a_3 \rangle, \langle b_1, b_2, b_3 \rangle) = (c_1\,dx \wedge dy + c_2\,dx \wedge dz + c_3\,dy \wedge dz)(\langle a_1, a_2, a_3 \rangle, \langle b_1, b_2, b_3 \rangle)$, for some real numbers $c_1$, $c_2$ and $c_3$.
The preceding comments, and this last exercise, give the following geometric interpretation of the action of a 2-form on the pair of vectors $(V_1, V_2)$:
Every 2-form projects the parallelogram spanned by $V_1$ and $V_2$ onto each of the (2-dimensional) coordinate planes, computes the resulting (signed) areas, multiplies each by some constant, and adds the results.
This interpretation holds in all dimensions. Hence, to specify a 2-form we need to know as many constants as there are 2-dimensional coordinate planes. For example, to give a 2-form in 4-dimensional Euclidean space we need to specify six numbers:
$$c_1\,dx \wedge dy + c_2\,dx \wedge dz + c_3\,dx \wedge dw + c_4\,dy \wedge dz + c_5\,dy \wedge dw + c_6\,dz \wedge dw$$
The skeptic may argue here. Problem 4.14 only shows that a 2-form which is a product of 1-forms can be thought of as a sum of projected, scaled areas. What about an arbitrary 2-form? Well, to address this, we need to know what an arbitrary 2-form is! Up until now we have not given a complete definition. Henceforth, we shall define a 2-form to be a bilinear, skew-symmetric, real-valued function on $T_p\mathbb{R}^n \times T_p\mathbb{R}^n$. That is a mouthful. This just means that it is an operator which eats pairs of vectors, spits out real numbers, and satisfies the conclusions of Problems 4.6 and 4.8. Since these are the only ingredients necessary to do Problem 4.14, our geometric interpretation is valid for all 2-forms.
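This description can be sketched in code: a general 2-form is specified by one constant per coordinate plane, and its value on a pair of vectors is the correspondingly weighted sum of projected signed areas. The dictionary encoding below is an illustrative choice, not notation from the text.

```python
import numpy as np

def two_form(coeffs):
    """A general 2-form on T_p R^n: `coeffs` maps an axis pair (i, j), i < j,
    to the constant multiplying dx_i ^ dx_j."""
    def evaluate(V1, V2):
        total = 0.0
        for (i, j), c in coeffs.items():
            # signed area of the projection onto the x_i x_j coordinate plane
            total += c * (V1[i] * V2[j] - V1[j] * V2[i])
        return total
    return evaluate

# A 2-form on T_p R^4 is specified by six constants, one per coordinate plane.
w = two_form({(0, 1): 1.0, (0, 2): 2.0, (0, 3): -1.0,
              (1, 2): 0.5, (1, 3): 3.0, (2, 3): -2.0})

V1 = np.array([1.0, 0.0, 2.0, 1.0])
V2 = np.array([0.0, 1.0, 1.0, -1.0])
val = w(V1, V2)
```

Since each projected signed area flips sign when the two vectors are swapped, any such weighted sum is automatically bilinear and skew-symmetric.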
4.15. If $\omega(\langle dx, dy, dz \rangle) = dx + 5\,dy - dz$ and $\nu(\langle dx, dy, dz \rangle) = 2\,dx - dy + dz$, compute
4.16. Let $\omega(\langle dx, dy, dz \rangle) = dx + 5\,dy - dz$ and $\nu(\langle dx, dy, dz \rangle) = 2\,dx - dy + dz$. Find constants $c_1$, $c_2$ and $c_3$, such that
$$\omega \wedge \nu = c_1\,dx \wedge dy + c_2\,dy \wedge dz + c_3\,dx \wedge dz.$$
4.17. Express each of the following as the product of two 1-forms:
$3\,dx \wedge dy + dy \wedge dx$.
$dx \wedge dy + dx \wedge dz$.
$3\,dx \wedge dy + dy \wedge dx + dx \wedge dz$.
$dx \wedge dy + 3\,dz \wedge dy + 4\,dx \wedge dz$.
4.4 2-forms on $T_p\mathbb{R}^3$ (optional)
4.18. Find a 2-form which is not the product of 1-forms.
In doing this exercise, you may guess that, in fact, all 2-forms on $T_p\mathbb{R}^3$ can be written as a product of 1-forms. Let's see a proof of this fact that relies heavily on the geometric interpretations we have developed.
Recall the correspondence introduced above between vectors and 1-forms. If $\alpha = a_1\,dx + a_2\,dy + a_3\,dz$, then we let $\langle \alpha \rangle = \langle a_1, a_2, a_3 \rangle$. If $V$ is a vector, then we let $\langle V \rangle^{-1}$ be the corresponding 1-form.
We now prove two lemmas:
Lemma 1. If $\alpha$ and $\beta$ are 1-forms on $T_p\mathbb{R}^3$ and $V$ is a vector in the plane spanned by $\langle \alpha \rangle$ and $\langle \beta \rangle$, then there is a vector, $W$, in this plane such that $\alpha \wedge \beta = \langle V \rangle^{-1} \wedge \langle W \rangle^{-1}$.
Proof. The proof of this lemma relies heavily on the fact that 2-forms which are the product of 1-forms are very flexible. The 2-form $\alpha \wedge \beta$ takes pairs of vectors, projects them onto the plane spanned by the vectors $\langle \alpha \rangle$ and $\langle \beta \rangle$, and computes the area of the resulting parallelogram times the area of the parallelogram spanned by $\langle \alpha \rangle$ and $\langle \beta \rangle$. Note that for every non-zero scalar $c$, the area of the parallelogram spanned by $\langle \alpha \rangle$ and $\langle \beta \rangle$ is the same as the area of the parallelogram spanned by $c\langle \alpha \rangle$ and $\frac{1}{c}\langle \beta \rangle$. (This is the same thing as saying that $\alpha \wedge \beta = c\alpha \wedge \frac{1}{c}\beta$.) The important point here is that we can scale one of the 1-forms as much as we want at the expense of the other and get the same 2-form as a product.
Another thing we can do is apply a rotation to the pair of vectors $\langle\alpha\rangle$ and $\langle\beta\rangle$ in the plane which they determine. As the area of the parallelogram spanned by these two vectors is unchanged by rotation, their product still determines the same 2-form. In particular, suppose $V$ is any vector in the plane spanned by $\langle\alpha\rangle$ and $\langle\beta\rangle$. Then we can rotate $\langle\alpha\rangle$ and $\langle\beta\rangle$ to vectors $\langle\alpha'\rangle$ and $\langle\beta'\rangle$ so that $c\langle\alpha'\rangle = V$, for some scalar $c$. We can then replace the pair $(\langle\alpha'\rangle, \langle\beta'\rangle)$ with the pair $(c\langle\alpha'\rangle, \frac{1}{c}\langle\beta'\rangle) = (V, \frac{1}{c}\langle\beta'\rangle)$. To complete the proof, let $W = \frac{1}{c}\langle\beta'\rangle$.
Lemma 2. If $\omega_1 = \alpha_1 \wedge \beta_1$ and $\omega_2 = \alpha_2 \wedge \beta_2$ are 2-forms on $T_p\mathbb{R}^3$, then there exist 1-forms, $\alpha_3$ and $\beta_3$, such that $\omega_1 + \omega_2 = \alpha_3 \wedge \beta_3$.

Proof. Let's examine the sum, $\alpha_1 \wedge \beta_1 + \alpha_2 \wedge \beta_2$. Our first case is that the plane spanned by the pair $(\langle\alpha_1\rangle, \langle\beta_1\rangle)$ is the same as the plane spanned by the pair $(\langle\alpha_2\rangle, \langle\beta_2\rangle)$. In this case it must be that $\alpha_2 \wedge \beta_2 = C\,\alpha_1 \wedge \beta_1$ for some constant $C$, and hence, $\alpha_1 \wedge \beta_1 + \alpha_2 \wedge \beta_2 = (1+C)\,\alpha_1 \wedge \beta_1$.
If these two planes are not the same, then they intersect in a line. Let $V$ be a vector contained in this line. Then by the preceding lemma there are 1-forms $\gamma_1$ and $\gamma_2$ such that $\alpha_1 \wedge \beta_1 = \langle V\rangle^{-1} \wedge \gamma_1$ and $\alpha_2 \wedge \beta_2 = \langle V\rangle^{-1} \wedge \gamma_2$. Hence,
$$\alpha_1 \wedge \beta_1 + \alpha_2 \wedge \beta_2 = \langle V\rangle^{-1} \wedge \gamma_1 + \langle V\rangle^{-1} \wedge \gamma_2 = \langle V\rangle^{-1} \wedge (\gamma_1 + \gamma_2).$$
Now note that any 2-form is the sum of products of 1-forms. Hence, this last lemma implies that any 2-form on $T_p\mathbb{R}^3$ is a product of 1-forms. In other words:
Every 2-form on $T_p\mathbb{R}^3$ projects pairs of vectors onto some plane and returns the area of the resulting parallelogram, scaled by some constant.
This fact is precisely why all of classical vector calculus works. We explore this in the next few exercises, and further in Section 7.3.
4.19. Use the above geometric interpretation of the action of a 2-form on $T_p\mathbb{R}^3$ to justify the following statement: For every 2-form $\omega$ on $T_p\mathbb{R}^3$ there are non-zero vectors $V_1$ and $V_2$ such that $V_1$ is not a multiple of $V_2$, but $\omega(V_1, V_2) = 0$.
4.20. Does Problem 4.19 generalize to higher dimensions?
4.21. Show that if $\omega$ is a 2-form on $T_p\mathbb{R}^3$, then there is a line $l$ in $T_p\mathbb{R}^3$ such that if the plane spanned by $V_1$ and $V_2$ contains $l$, then $\omega(V_1, V_2) = 0$.
Note that the conditions of Problem 4.21 are satisfied when the vectors that are perpendicular to both $V_1$ and $V_2$ are also perpendicular to $l$.
4.22. Show that if all you know about $V_1$ and $V_2$ is that they are vectors in $T_p\mathbb{R}^3$ that span a parallelogram of area $A$, then the value of $\omega(V_1, V_2)$ is maximized when $V_1$ and $V_2$ are perpendicular to the line $l$ of Problem 4.21.
Note that the conditions of this exercise are satisfied when the vectors perpendicular to $V_1$ and $V_2$ are parallel to $l$.
4.23. Let $N$ be a vector perpendicular to $V_1$ and $V_2$ in $T_p\mathbb{R}^3$ whose length is precisely the area of the parallelogram spanned by these two vectors. Show that there is a vector $V_\omega$ in the line $l$ of Problem 4.21 such that the value of $\omega(V_1, V_2)$ is precisely $V_\omega \cdot N$.
Remark. You may have learned that the vector $N$ of the previous exercise is precisely the cross product of $V_1$ and $V_2$. Hence, the previous problem implies that if $\omega$ is a 2-form on $T_p\mathbb{R}^3$, then there is a vector $V_\omega$ such that $\omega(V_1, V_2) = V_\omega \cdot (V_1 \times V_2)$.
4.24. Show that if $\omega = F_x\,dy \wedge dz - F_y\,dx \wedge dz + F_z\,dx \wedge dy$, then $V_\omega = \langle F_x, F_y, F_z\rangle$.
4.5 2-forms and 3-forms on $T_p\mathbb{R}^4$ (optional)
Many of the techniques of the previous section can be used to prove results about 2- and 3-forms on $T_p\mathbb{R}^4$.
4.25. Show that any 3-form on $T_p\mathbb{R}^4$ can be written as the product of three 1-forms. (Hint: Two three-dimensional subspaces of $T_p\mathbb{R}^4$ must meet in at least a line.)
We now give away an answer to Problem 4.18. Let $\omega = dx \wedge dy + dz \wedge dw$. Then an easy computation shows that $\omega \wedge \omega = 2\,dx \wedge dy \wedge dz \wedge dw$. But if $\omega$ were equal to $\alpha \wedge \beta$, for some 1-forms $\alpha$ and $\beta$, then $\omega \wedge \omega$ would be zero (why?). This argument shows that, in general, if $\omega$ is any 2-form such that $\omega \wedge \omega \neq 0$, then $\omega$ cannot be written as the product of 1-forms.
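The computation $\omega \wedge \omega = 2\,dx \wedge dy \wedge dz \wedge dw$ can also be checked numerically. The sketch below is not from the text: it represents a 2-form by its antisymmetric coefficient matrix and evaluates the product of two 2-forms with the standard (2,2)-shuffle sum; all function names are my own.

```python
from itertools import combinations
import numpy as np

def two_form(A):
    """A 2-form given by an antisymmetric matrix: omega(v, w) = v^T A w."""
    A = np.asarray(A, dtype=float)
    return lambda v, w: float(np.asarray(v) @ A @ np.asarray(w))

def perm_sign(p):
    """Sign of a permutation, by counting inversions."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def wedge22(om, eta, vs):
    """(om ^ eta)(v1, v2, v3, v4) via the (2,2)-shuffle sum."""
    total = 0.0
    for pair in combinations(range(4), 2):
        rest = tuple(k for k in range(4) if k not in pair)
        total += (perm_sign(pair + rest)
                  * om(vs[pair[0]], vs[pair[1]])
                  * eta(vs[rest[0]], vs[rest[1]]))
    return total

# omega = dx^dy + dz^dw on T_p R^4
omega = two_form([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])
e = np.eye(4)
print(wedge22(omega, omega, e))  # 2.0, i.e. omega^omega = 2 dx^dy^dz^dw on the basis
```

Since $\omega \wedge \omega$ is non-zero on the standard basis, this $\omega$ indeed cannot be a product of 1-forms.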
4.26. Let $\omega$ be a 2-form on $T_p\mathbb{R}^4$. Show that $\omega$ can be written as the sum of exactly two products; that is, $\omega = \alpha \wedge \beta + \delta \wedge \gamma$. (Hint: Given three planes in $T_p\mathbb{R}^4$, there are at least two of them that intersect in more than a point.)
Above, we saw that if $\omega$ is a 2-form such that $\omega \wedge \omega \neq 0$, then $\omega$ is not the product of 1-forms. We now use the previous exercise to show the converse:
Theorem 1. If $\omega$ is a 2-form on $T_p\mathbb{R}^4$ such that $\omega \wedge \omega = 0$, then $\omega$ can be written as the product of two 1-forms.
Our proof of this again relies heavily on the geometry of the situation. By the previous exercise, $\omega = \alpha \wedge \beta + \delta \wedge \gamma$. A short computation then shows
$$\omega \wedge \omega = 2\,\alpha \wedge \beta \wedge \delta \wedge \gamma.$$
If this 4-form is the zero 4-form, then it must be the case that the (4-dimensional) volume of the parallelepiped spanned by $\langle\alpha\rangle$, $\langle\beta\rangle$, $\langle\delta\rangle$ and $\langle\gamma\rangle$ is zero. This, in turn, implies that the plane spanned by $\langle\alpha\rangle$ and $\langle\beta\rangle$ meets the plane spanned by $\langle\delta\rangle$ and $\langle\gamma\rangle$ in at least a line (show this!). Call such an intersection line $L$.
As in the previous section, we can now rotate $\langle\alpha\rangle$ and $\langle\beta\rangle$, in the plane they span, to vectors $\langle\alpha'\rangle$ and $\langle\beta'\rangle$ such that $\langle\alpha'\rangle$ lies in the line $L$. The 2-form $\alpha' \wedge \beta'$ must equal $\alpha \wedge \beta$ since they determine the same plane, and span a parallelogram of the same area. Similarly, we rotate $\langle\delta\rangle$ and $\langle\gamma\rangle$ to vectors $\langle\delta'\rangle$ and $\langle\gamma'\rangle$ such that $\langle\delta'\rangle \subset L$. It follows that $\delta \wedge \gamma = \delta' \wedge \gamma'$.
Since $\langle\alpha'\rangle$ and $\langle\delta'\rangle$ lie on the same line, there is a constant $c$ such that $c\,\alpha' = \delta'$. We now put all of this information together:
$$\omega = \alpha \wedge \beta + \delta \wedge \gamma = \alpha' \wedge \beta' + \delta' \wedge \gamma' = \alpha' \wedge \beta' + c\,\alpha' \wedge \gamma' = \alpha' \wedge (\beta' + c\,\gamma').$$
4.6 $n$-forms

Let's think a little more about our multiplication operator, $\wedge$. If it is really going to be anything like multiplication, we should be able to take three 1-forms, $\omega$, $\nu$ and $\psi$, and form the product $\omega \wedge \nu \wedge \psi$. How can we define this? A first guess might be to say that $\omega \wedge \nu \wedge \psi = \omega \wedge (\nu \wedge \psi)$, but $\nu \wedge \psi$ is a 2-form and we have not defined the product of a 2-form and a 1-form. We take a different approach and define $\omega \wedge \nu \wedge \psi$ directly.
This is completely analogous to the previous section. $\omega$, $\nu$ and $\psi$ each act on a vector, $V$, to give three numbers. In other words, they can be thought of as coordinate functions. We say the coordinates of $V$ are $[\omega(V), \nu(V), \psi(V)]$. Hence, if we have three vectors, $V_1$, $V_2$ and $V_3$, we can compute the $[\omega, \nu, \psi]$ coordinates of each. This gives us three new vectors. The signed volume of the parallelepiped which they span is what we define to be the value of $\omega \wedge \nu \wedge \psi(V_1, V_2, V_3)$.
There is no reason to stop at three dimensions. Suppose $\omega_1, \omega_2, \ldots, \omega_n$ are 1-forms and $V_1, V_2, \ldots, V_n$ are vectors. Then we define the value of $\omega_1 \wedge \omega_2 \wedge \cdots \wedge \omega_n(V_1, V_2, \ldots, V_n)$ to be the signed ($n$-dimensional) volume of the parallelepiped spanned by the vectors $[\omega_1(V_i), \omega_2(V_i), \ldots, \omega_n(V_i)]$. Algebraically,
$$\omega_1 \wedge \omega_2 \wedge \cdots \wedge \omega_n(V_1, V_2, \ldots, V_n) = \det\left[\omega_j(V_i)\right].$$
Note that, just as in Problem 4.12, if $\alpha$, $\beta$ and $\gamma$ are 1-forms on $T_p\mathbb{R}^3$, then $\alpha \wedge \beta \wedge \gamma(V_1, V_2, V_3)$ is the (signed) volume of the parallelepiped spanned by $V_1$, $V_2$ and $V_3$ times the volume of the parallelepiped spanned by $\langle\alpha\rangle$, $\langle\beta\rangle$ and $\langle\gamma\rangle$.
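In terms of the determinant description above, evaluating a wedge product of $n$ 1-forms is a one-liner with `numpy`. This is a minimal sketch, not from the text; the helper names are mine.

```python
import numpy as np

def one_form(coeffs):
    """A 1-form on R^n given by its coefficients:
    (a1 dx1 + ... + an dxn)(V) = a . V."""
    a = np.asarray(coeffs, dtype=float)
    return lambda v: float(a @ np.asarray(v, dtype=float))

def wedge(forms, vectors):
    """Evaluate w1 ^ ... ^ wn on (V1, ..., Vn) as det [w_i(V_j)]."""
    M = np.array([[w(v) for v in vectors] for w in forms])
    return float(np.linalg.det(M))

dx, dy, dz = one_form([1, 0, 0]), one_form([0, 1, 0]), one_form([0, 0, 1])
print(wedge([dx, dy, dz], [[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # 1.0
print(wedge([dx, dy, dz], [[0, 1, 0], [1, 0, 0], [0, 0, 1]]))  # -1.0
```

The second evaluation illustrates the alternating property discussed below: swapping two of the vectors flips the sign.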
4.27. Suppose $\omega$ is a 2-form on $T_p\mathbb{R}^3$ and $\nu$ is a 1-form on $T_p\mathbb{R}^3$. Show that if $\omega \wedge \nu = 0$, then there is a 1-form $\gamma$ such that $\omega = \nu \wedge \gamma$. (Hint: Combine the above geometric interpretation of a 3-form which is the product of 1-forms on $T_p\mathbb{R}^3$ with the results of Section 4.4.)
It follows from linear algebra that if we swap any two rows or columns of this matrix, the sign of the result flips. Hence, if the $n$-tuple $\mathbf{V}' = (V_{i_1}, V_{i_2}, \ldots, V_{i_n})$ is obtained from $\mathbf{V} = (V_1, V_2, \ldots, V_n)$ by an even number of exchanges, then the sign of $\omega_1 \wedge \omega_2 \wedge \cdots \wedge \omega_n(\mathbf{V}')$ will be the same as the sign of $\omega_1 \wedge \omega_2 \wedge \cdots \wedge \omega_n(\mathbf{V})$. If the number of exchanges is odd, then the sign is opposite. We sum this up by saying that the $n$-form $\omega_1 \wedge \omega_2 \wedge \cdots \wedge \omega_n$ is alternating.
The wedge product of 1-forms is also multilinear, in the following sense:
$$\omega_1 \wedge \cdots \wedge \omega_n(V_1, \ldots, V_i + V_i', \ldots, V_n) = \omega_1 \wedge \cdots \wedge \omega_n(V_1, \ldots, V_i, \ldots, V_n) + \omega_1 \wedge \cdots \wedge \omega_n(V_1, \ldots, V_i', \ldots, V_n)$$
and
$$\omega_1 \wedge \cdots \wedge \omega_n(V_1, \ldots, cV_i, \ldots, V_n) = c\,\omega_1 \wedge \cdots \wedge \omega_n(V_1, \ldots, V_i, \ldots, V_n),$$
for all $i$ and any real number $c$.
In general, we define an $n$-form to be any alternating, multilinear real-valued function which acts on $n$-tuples of vectors.
4.28. Prove the following geometric interpretation: (Hint. All of the steps are completely analogous to those in the last section.)
An $m$-form on $T_p\mathbb{R}^n$ can be thought of as a function which takes the parallelepiped spanned by $m$ vectors, projects it onto each of the $m$-dimensional coordinate planes, computes the resulting areas, multiplies each by some constant, and adds the results.
4.29. How many numbers do you need to give to specify a 5-form on $T_p\mathbb{R}^{10}$?
We turn now to the simple case of an $n$-form on $T_p\mathbb{R}^n$. Notice that there is only one $n$-dimensional coordinate plane in this space, namely, the space itself. Such a form, evaluated on an $n$-tuple of vectors, must therefore give the $n$-dimensional volume of the parallelepiped which it spans, multiplied by some constant. For this reason such a form is called a volume form (in 2 dimensions, an area form).
Example 15. Consider the forms $\omega = dx + 2\,dy - dz$, $\nu = 3\,dx - dy + dz$ and $\psi = -dx - 3\,dy + dz$ on $T_p\mathbb{R}^3$. By the above argument, $\omega \wedge \nu \wedge \psi$ must be a volume form. But which volume form is it? One way to tell is to compute its value on a set of vectors which we know span a parallelepiped of volume one, namely $\langle 1, 0, 0\rangle$, $\langle 0, 1, 0\rangle$ and $\langle 0, 0, 1\rangle$. This will tell us how much the form scales volume:
$$\omega \wedge \nu \wedge \psi\,(\langle 1, 0, 0\rangle, \langle 0, 1, 0\rangle, \langle 0, 0, 1\rangle) = \begin{vmatrix} 1 & 2 & -1 \\ 3 & -1 & 1 \\ -1 & -3 & 1 \end{vmatrix} = 4.$$
So, $\omega \wedge \nu \wedge \psi$ must be the same as the form $4\,dx \wedge dy \wedge dz$.
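We can confirm Example 15 by machine: on the standard basis, $\omega \wedge \nu \wedge \psi$ is the determinant of the matrix whose rows are the coefficients of the three 1-forms. A quick check, assuming `numpy` is available:

```python
import numpy as np

# Rows: coefficients of omega, nu, psi from Example 15
M = np.array([[ 1.0,  2.0, -1.0],   # omega = dx + 2dy - dz
              [ 3.0, -1.0,  1.0],   # nu    = 3dx - dy + dz
              [-1.0, -3.0,  1.0]])  # psi   = -dx - 3dy + dz

# omega^nu^psi(<1,0,0>, <0,1,0>, <0,0,1>) = det [form_i(e_j)] = det(M)
print(np.linalg.det(M))  # ≈ 4.0
```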
4.30. Let $\omega = dx + 5\,dy - dz$, $\nu = 2\,dx - dy + dz$ and $\gamma = -dx + dy + 2\,dz$.
If $V_1 = \langle 1, 0, 2\rangle$, $V_2 = \langle 1, 1, 2\rangle$ and $V_3 = \langle 0, 2, 3\rangle$, compute $\omega \wedge \nu \wedge \gamma(V_1, V_2, V_3)$.
Find a constant, $c$, such that $\omega \wedge \nu \wedge \gamma = c\,dx \wedge dy \wedge dz$.
Let $\alpha = 3\,dx \wedge dy + 2\,dy \wedge dz - dx \wedge dz$. Find a constant, $c$, such that $\alpha \wedge \gamma = c\,dx \wedge dy \wedge dz$.
4.31. Simplify:
$$dx \wedge dy \wedge dz + dx \wedge dz \wedge dy + dy \wedge dz \wedge dx + dy \wedge dx \wedge dy$$
4.32. Let $\omega$ be an $n$-form and $\nu$ an $m$-form. Show that $\omega \wedge \nu = (-1)^{nm}\,\nu \wedge \omega$.
Use this to show that if $n$ is odd, then $\omega \wedge \omega = 0$.
4.7 Algebraic computation of products
In this section, we break with the spirit of the text briefly. At this point, we have amassed enough algebraic identities that multiplying forms becomes similar to multiplying polynomials. We quickly summarize these identities and work a few examples.
Let $\omega$ be an $n$-form and $\nu$ be an $m$-form. Then we have the following identities:
$$\begin{aligned}
\omega \wedge \nu & = (-1)^{nm}\,\nu \wedge \omega \\
\omega \wedge \omega & = 0 \text{ if } n \text{ is odd} \\
\omega \wedge (\nu + \psi) & = \omega \wedge \nu + \omega \wedge \psi \\
(\nu + \psi) \wedge \omega & = \nu \wedge \omega + \psi \wedge \omega
\end{aligned}$$
Example 16.
$$\begin{aligned}
(x\,dx + y\,dy) \wedge (y\,dx + x\,dy) & = xy\,dx \wedge dx + x^2\,dx \wedge dy + y^2\,dy \wedge dx + yx\,dy \wedge dy \\
& = x^2\,dx \wedge dy + y^2\,dy \wedge dx \\
& = x^2\,dx \wedge dy - y^2\,dx \wedge dy \\
& = (x^2 - y^2)\,dx \wedge dy
\end{aligned}$$
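The algebra in Example 16 can be sanity-checked by evaluating both sides on a pair of vectors, using the determinant description of a product of 1-forms. A sketch, with the sample point and vectors chosen arbitrarily:

```python
import numpy as np

def wedge2(a, b, v, w):
    """(a ^ b)(v, w) = det [[a(v), a(w)], [b(v), b(w)]] for 1-forms a, b."""
    return (a @ v) * (b @ w) - (a @ w) * (b @ v)

x, y = 2.0, 3.0                       # an arbitrary sample point
alpha = np.array([x, y])              # x dx + y dy
beta  = np.array([y, x])              # y dx + x dy
v, w  = np.array([1.0, 4.0]), np.array([-2.0, 5.0])

lhs = wedge2(alpha, beta, v, w)
rhs = (x**2 - y**2) * (v[0] * w[1] - v[1] * w[0])   # (x^2 - y^2) dx^dy (v, w)
print(lhs, rhs)  # both -65.0
```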
Example 17.
$$\begin{aligned}
(x\,dx + y\,dy) \wedge (xz\,dx \wedge dz + yz\,dy \wedge dz) & = x^2z\,dx \wedge dx \wedge dz + xyz\,dx \wedge dy \wedge dz \\
& \quad + yxz\,dy \wedge dx \wedge dz + y^2z\,dy \wedge dy \wedge dz \\
& = xyz\,dx \wedge dy \wedge dz + yxz\,dy \wedge dx \wedge dz \\
& = xyz\,dx \wedge dy \wedge dz - xyz\,dx \wedge dy \wedge dz \\
& = 0.
\end{aligned}$$
4.33. Expand and simplify:
$$[(x-y)\,dx + (x+y)\,dy + z\,dz] \wedge [(x-y)\,dx + (x+y)\,dy]$$
$$(2\,dx + 3\,dy) \wedge (dx - dz) \wedge (dx + dy + dz)$$
5
Differential Forms
5.1 Families of forms
Let us now go back to the example in Chapter 3. In the last section of that chapter, we showed that the integral of a function, $f: \mathbb{R}^3 \rightarrow \mathbb{R}$, over a surface parameterized by $\phi: R \subset \mathbb{R}^2 \rightarrow \mathbb{R}^3$ is
$$\int_R f(\phi(r, \theta))\,\text{Area}\left[\frac{\partial \phi}{\partial r}(r, \theta), \frac{\partial \phi}{\partial \theta}(r, \theta)\right] dr\,d\theta.$$
This gave one motivation for studying differential forms. We wanted to generalize this integral by considering functions other than "Area$(\cdot, \cdot)$" that eat pairs of vectors and return numbers. But in this integral the point at which such a pair of vectors is based changes. In other words, Area$(\cdot, \cdot)$ does not act on $T_p\mathbb{R}^3 \times T_p\mathbb{R}^3$ for a fixed $p$. We can make this point a little clearer by re-examining the above integrand. Note that it is of the form $f(\cdot)\,\text{Area}(\cdot, \cdot)$. For a fixed point $\phi(r, \theta)$ of $\mathbb{R}^3$, this is an operator on $T_{\phi(r,\theta)}\mathbb{R}^3 \times T_{\phi(r,\theta)}\mathbb{R}^3$, much like a 2-form is.
But so far all we have done is to define 2-forms at fixed points of $\mathbb{R}^3$. To really generalize the above integral we must start to consider entire families of 2-forms, $\omega_p: T_p\mathbb{R}^3 \times T_p\mathbb{R}^3 \rightarrow \mathbb{R}$, where $p$ ranges over all of $\mathbb{R}^3$. Of course, for this to be useful such a family must have some "niceness" properties. For one thing, it should be continuous. That is, if $p$ and $q$ are close, then $\omega_p$ and $\omega_q$ should be similar.
An even stronger property is that the family $\omega_p$ is differentiable. To see what this means, recall that for a fixed $p$, a 2-form $\omega_p$ can always be written as $a_p\,dx \wedge dy + b_p\,dy \wedge dz + c_p\,dx \wedge dz$, where $a_p$, $b_p$ and $c_p$ are constants. But if we let our choice of $p$ vary over all of $\mathbb{R}^3$, then so will these constants. In other words, $a_p$, $b_p$ and $c_p$ are all functions from $\mathbb{R}^3$ to $\mathbb{R}$. To say that the family $\omega_p$ is differentiable means that each of these functions is differentiable. If $\omega_p$ is differentiable, then we will refer to it as a differential form. When there can be no confusion we will suppress the subscript $p$.

Example 18. $\omega = x^2y\,dx \wedge dy - xz\,dy \wedge dz$ is a differential 2-form on $\mathbb{R}^3$. On the space $T_{(1,2,3)}\mathbb{R}^3$ it is just the 2-form $2\,dx \wedge dy - 3\,dy \wedge dz$. We will denote vectors in $T_{(1,2,3)}\mathbb{R}^3$ as $\langle dx, dy, dz\rangle_{(1,2,3)}$. Hence, the value of $\omega(\langle 4, 0, -1\rangle_{(1,2,3)}, \langle 3, 1, 2\rangle_{(1,2,3)})$ is the same as the 2-form $2\,dx \wedge dy - 3\,dy \wedge dz$ evaluated on the vectors $\langle 4, 0, -1\rangle$ and $\langle 3, 1, 2\rangle$, which we compute:
$$\begin{aligned}
\omega\left(\langle 4, 0, -1\rangle_{(1,2,3)}, \langle 3, 1, 2\rangle_{(1,2,3)}\right) & = 2\,dx \wedge dy - 3\,dy \wedge dz\,(\langle 4, 0, -1\rangle, \langle 3, 1, 2\rangle) \\
& = 2\begin{vmatrix} 4 & 3 \\ 0 & 1 \end{vmatrix} - 3\begin{vmatrix} 0 & 1 \\ -1 & 2 \end{vmatrix} = 5.
\end{aligned}$$
Suppose $\omega$ is a differential 2-form on $\mathbb{R}^3$. What does $\omega$ act on? It takes a pair of vectors at each point of $\mathbb{R}^3$ and returns a number. In other words, it takes two vector fields and returns a function from $\mathbb{R}^3$ to $\mathbb{R}$. A vector field is simply a choice of vector in $T_p\mathbb{R}^3$, for each $p \in \mathbb{R}^3$. In general, a differential $n$-form on $\mathbb{R}^m$ acts on $n$ vector fields to produce a function from $\mathbb{R}^m$ to $\mathbb{R}$ (see Fig. 5.1).
Fig. 5.1. A differential 2-form, $\omega$, acts on a pair of vector fields, and returns a function from $\mathbb{R}^n$ to $\mathbb{R}$.
Example 19. $V_1 = \langle 2y, 0, -x\rangle_{(x,y,z)}$ is a vector field on $\mathbb{R}^3$. For example, it contains the vector $\langle 4, 0, -1\rangle \in T_{(1,2,3)}\mathbb{R}^3$. If $V_2 = \langle z, 1, xy\rangle_{(x,y,z)}$ and $\omega$ is the differential 2-form $x^2y\,dx \wedge dy - xz\,dy \wedge dz$, then $\omega(V_1, V_2)$ is the function from $\mathbb{R}^3$ to $\mathbb{R}$ given by
$$\begin{aligned}
\omega(V_1, V_2) & = x^2y\,dx \wedge dy - xz\,dy \wedge dz\,\left(\langle 2y, 0, -x\rangle_{(x,y,z)}, \langle z, 1, xy\rangle_{(x,y,z)}\right) \\
& = x^2y\begin{vmatrix} 2y & z \\ 0 & 1 \end{vmatrix} - xz\begin{vmatrix} 0 & 1 \\ -x & xy \end{vmatrix} = 2x^2y^2 - x^2z.
\end{aligned}$$
Notice that $V_2$ contains the vector $\langle 3, 1, 2\rangle_{(1,2,3)}$. So, from the previous example we would expect that $2x^2y^2 - x^2z$ equals 5 at the point $(1, 2, 3)$, which is indeed the case.
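The computation in Example 19 is easy to replicate by machine. A minimal sketch; the function names are mine, not the book's:

```python
import numpy as np

def omega(p, v, w):
    """The differential 2-form x^2 y dx^dy - x z dy^dz, at p, on vectors v, w."""
    x, y, z = p
    dxdy = v[0] * w[1] - v[1] * w[0]
    dydz = v[1] * w[2] - v[2] * w[1]
    return x**2 * y * dxdy - x * z * dydz

def V1(p):
    x, y, z = p
    return np.array([2 * y, 0.0, -x])     # the vector field <2y, 0, -x>

def V2(p):
    x, y, z = p
    return np.array([z, 1.0, x * y])      # the vector field <z, 1, xy>

p = (1.0, 2.0, 3.0)
print(omega(p, V1(p), V2(p)))                    # 5.0
print(2 * p[0]**2 * p[1]**2 - p[0]**2 * p[2])    # the formula 2x^2y^2 - x^2z also gives 5.0
```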
5.1. Let $\omega$ be the differential 2-form on $\mathbb{R}^3$ given by
$$\omega = xyz\,dx \wedge dy + x^2z\,dy \wedge dz - y\,dx \wedge dz.$$
Let $V_1$ and $V_2$ be the following vector fields:
$$V_1 = \langle y, z, x^2\rangle_{(x,y,z)}, \quad V_2 = \langle xy, xz, y\rangle_{(x,y,z)}.$$
What vectors do $V_1$ and $V_2$ contain at the point $(1, 2, 3)$?
Which 2-form is $\omega$ on $T_{(1,2,3)}\mathbb{R}^3$?
Use your answers to the previous two questions to compute $\omega(V_1, V_2)$ at the point $(1, 2, 3)$.
Compute $\omega(V_1, V_2)$ at the point $(x, y, z)$. Then plug in $x = 1$, $y = 2$ and $z = 3$ to check your answer against the previous question.
5.2 Integrating differential 2-forms
Let's now recall the steps involved with integration of functions on subsets of $\mathbb{R}^2$, which we learned in Section 1.3. Suppose $R \subset \mathbb{R}^2$ and $f: R \rightarrow \mathbb{R}$. The following steps define the integral of $f$ over $R$:
1. Choose a lattice of points in $R$, $\{(x_i, y_j)\}$.
2. For each $i, j$ define $V^1_{i,j} = (x_{i+1}, y_j) - (x_i, y_j)$ and $V^2_{i,j} = (x_i, y_{j+1}) - (x_i, y_j)$ (see Fig. 5.2). Notice that $V^1_{i,j}$ and $V^2_{i,j}$ are both vectors in $T_{(x_i, y_j)}\mathbb{R}^2$.
3. For each $i, j$ compute $f(x_i, y_j)\,\text{Area}(V^1_{i,j}, V^2_{i,j})$, where $\text{Area}(V, W)$ is the function which returns the area of the parallelogram spanned by the vectors $V$ and $W$.
4. Sum over all $i$ and $j$.
5. Take the limit as the maximal distance between adjacent lattice points goes to zero. This is the number that we define to be the value of $\int_R f\,dx\,dy$.
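The five steps above translate directly into code. A sketch, in which a fixed lattice spacing stands in for the limit in Step 5:

```python
import numpy as np

def integrate_2form(f, x0, x1, y0, y1, n=200):
    """Follow Steps 1-4: choose a lattice, form the edge vectors V1 and V2,
    compute f * Area for each cell, and sum."""
    xs = np.linspace(x0, x1, n + 1)
    ys = np.linspace(y0, y1, n + 1)
    total = 0.0
    for i in range(n):
        for j in range(n):
            # V1 = (x_{i+1}, y_j) - (x_i, y_j), V2 = (x_i, y_{j+1}) - (x_i, y_j)
            area = (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
            total += f(xs[i], ys[j]) * area
    return total

# Integrate (x + y) dx^dy over the unit square; the exact value is 1.
print(integrate_2form(lambda x, y: x + y, 0, 1, 0, 1))  # ≈ 1
```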
Let's focus on Step 3. Here we compute $f(x_i, y_j)\,\text{Area}(V^1_{i,j}, V^2_{i,j})$. Notice that this is exactly the value of the differential 2-form $\omega = f(x, y)\,dx \wedge dy$, evaluated on the vectors $V^1_{i,j}$ and $V^2_{i,j}$ at the point $(x_i, y_j)$. Hence, in Step 4 we can write this sum as $\sum_i \sum_j f(x_i, y_j)\,\text{Area}(V^1_{i,j}, V^2_{i,j}) = \sum_i \sum_j \omega_{(x_i, y_j)}(V^1_{i,j}, V^2_{i,j})$. It is reasonable, then, to adopt the shorthand "$\int_R \omega$" to denote the limit in Step 5.
The upshot of all this is the following:
" If "omega=f(x,y)dx^^dy," then "int_(R)omega=int_(R)fdxdy.\text { If } \omega=f(x, y) d x \wedge d y, \text { then } \int_{R} \omega=\int_{R} f d x d y .
Fig. 5.2. The steps toward integration.
Since all differential 2-forms on $\mathbb{R}^2$ are of the form $f(x, y)\,dx \wedge dy$, we now know how to integrate them.
CAUTION! When integrating 2-forms on $\mathbb{R}^2$, it is tempting to always drop the "$\wedge$" and forget you have a differential form. This is only valid with $dx \wedge dy$. It is NOT valid with $dy \wedge dx$. This may seem a bit curious, since Fubini's theorem gives us
$$\int f\,dx \wedge dy = \int f\,dx\,dy = \int f\,dy\,dx.$$
All of these are equal to $-\int f\,dy \wedge dx$. We will revisit this issue in Example 27.
5.2. Let $\omega = xy^2\,dx \wedge dy$ be a differential 2-form on $\mathbb{R}^2$. Let $D$ be the region of $\mathbb{R}^2$ where $0 \leq x \leq 1$ and $0 \leq y \leq 1$. Calculate $\int_D \omega$.
What about integration of differential 2-forms on $\mathbb{R}^3$? As remarked at the end of Section 3.5, we do this only over those subsets of $\mathbb{R}^3$ which can be parameterized by subsets of $\mathbb{R}^2$. Suppose $M$ is such a subset, like the top half of the unit sphere. To define what we mean by $\int_M \omega$ we just follow the steps above:
1. Choose a lattice of points in $M$, $\{p_{i,j}\}$.
2. For each $i, j$ define $V^1_{i,j} = p_{i+1,j} - p_{i,j}$ and $V^2_{i,j} = p_{i,j+1} - p_{i,j}$. Notice that $V^1_{i,j}$ and $V^2_{i,j}$ are both vectors in $T_{p_{i,j}}\mathbb{R}^3$ (see Fig. 5.3).
3. For each $i, j$ compute $\omega_{p_{i,j}}(V^1_{i,j}, V^2_{i,j})$.
4. Sum over all $i$ and $j$.
5. Take the limit as the maximal distance between adjacent lattice points goes to 0. This is the number that we define to be the value of $\int_M \omega$.
Fig. 5.3. The steps toward integrating a 2 -form.
Unfortunately, these steps are not so easy to follow. For one thing, it is not always clear how to pick the lattice in Step 1. In fact, there is an even worse problem. In Step 3, why did we compute $\omega_{p_{i,j}}(V^1_{i,j}, V^2_{i,j})$ instead of $\omega_{p_{i,j}}(V^2_{i,j}, V^1_{i,j})$? After all, $V^1_{i,j}$ and $V^2_{i,j}$ are two randomly oriented vectors in $T_{p_{i,j}}\mathbb{R}^3$. There is no reasonable way to decide which should be first and which second. There is nothing to be done about this. At some point we just have to make a choice and make it clear which choice we have made. Such a decision is called an orientation. We will have much more to say about this later. For now, we simply note that a different choice will only change our answer by changing its sign.
While we are on this topic, we also note that we would end up with the same number in Step 5 if we had calculated $\omega_{p_{i,j}}(-V^1_{i,j}, -V^2_{i,j})$ in Step 4, instead. Similarly, if it turns out later that we should have calculated $\omega_{p_{i,j}}(V^2_{i,j}, V^1_{i,j})$, then we could have also arrived at the right answer by computing $\omega_{p_{i,j}}(-V^1_{i,j}, V^2_{i,j})$. In other words, there are really only two possibilities: either $\omega_{p_{i,j}}(V^1_{i,j}, V^2_{i,j})$ gives the correct answer or $\omega_{p_{i,j}}(-V^1_{i,j}, V^2_{i,j})$ does. Which one will depend on our choice of orientation.
Despite all the difficulties with using the above definition of $\int_M \omega$, all hope is not lost. Remember that we are only integrating over regions which can be parameterized by subsets of $\mathbb{R}^2$. The trick is to use such a parameterization to translate the problem into an integral of a 2-form over a region in $\mathbb{R}^2$. The steps are analogous to those in Section 3.5.
Suppose $\phi: R \subset \mathbb{R}^2 \rightarrow M$ is a parameterization. We want to find a 2-form, $f(x, y)\,dx \wedge dy$, such that a Riemann sum for this 2-form over $R$ gives the same result as a Riemann sum for $\omega$ over $M$. Let's begin:
1. Choose a rectangular lattice of points in $R$, $\{(x_i, y_j)\}$. This also gives a lattice, $\{\phi(x_i, y_j)\}$, in $M$.
2. For each $i, j$, define $V^1_{i,j} = (x_{i+1}, y_j) - (x_i, y_j)$, $V^2_{i,j} = (x_i, y_{j+1}) - (x_i, y_j)$, $\mathcal{V}^1_{i,j} = \phi(x_{i+1}, y_j) - \phi(x_i, y_j)$, and $\mathcal{V}^2_{i,j} = \phi(x_i, y_{j+1}) - \phi(x_i, y_j)$ (see Fig. 5.4). Notice that $V^1_{i,j}$ and $V^2_{i,j}$ are vectors in $T_{(x_i, y_j)}\mathbb{R}^2$ and $\mathcal{V}^1_{i,j}$ and $\mathcal{V}^2_{i,j}$ are vectors in $T_{\phi(x_i, y_j)}\mathbb{R}^3$.
3. For each $i, j$ compute $f(x_i, y_j)\,dx \wedge dy(V^1_{i,j}, V^2_{i,j})$ and $\omega_{\phi(x_i, y_j)}(\mathcal{V}^1_{i,j}, \mathcal{V}^2_{i,j})$.
4. Sum over all $i$ and $j$.
Fig. 5.4. Using varphi\varphi to integrate a 2 -form.
At the conclusion of Step 4 we have two sums, \sum_{i} \sum_{j} f\left(x_{i}, y_{j}\right) dx \wedge dy\left(V_{i,j}^{1}, V_{i,j}^{2}\right) and \sum_{i} \sum_{j} \omega_{\varphi\left(x_{i}, y_{j}\right)}\left(\mathcal{V}_{i,j}^{1}, \mathcal{V}_{i,j}^{2}\right). In order for these to be equal, we must have:
f\left(x_{i}, y_{j}\right) dx \wedge dy\left(V_{i, j}^{1}, V_{i, j}^{2}\right)=\omega_{\varphi\left(x_{i}, y_{j}\right)}\left(\mathcal{V}_{i, j}^{1}, \mathcal{V}_{i, j}^{2}\right).
And so,
f\left(x_{i}, y_{j}\right)=\frac{\omega_{\varphi\left(x_{i}, y_{j}\right)}\left(\mathcal{V}_{i, j}^{1}, \mathcal{V}_{i, j}^{2}\right)}{dx \wedge dy\left(V_{i, j}^{1}, V_{i, j}^{2}\right)}.
But, since we are using a rectangular lattice in R, we know dx \wedge dy\left(V_{i, j}^{1}, V_{i, j}^{2}\right)=\operatorname{Area}\left(V_{i, j}^{1}, V_{i, j}^{2}\right)=\left|V_{i, j}^{1}\right| \cdot\left|V_{i, j}^{2}\right|. We now have

f\left(x_{i}, y_{j}\right)=\omega_{\varphi\left(x_{i}, y_{j}\right)}\left(\frac{\mathcal{V}_{i, j}^{1}}{\left|V_{i, j}^{1}\right|}, \frac{\mathcal{V}_{i, j}^{2}}{\left|V_{i, j}^{2}\right|}\right).
Let's summarize what we have so far. We defined f(x,y)f(x, y) so that
\begin{aligned}
\sum_{i} \sum_{j} \omega_{\varphi\left(x_{i}, y_{j}\right)}\left(\mathcal{V}_{i, j}^{1}, \mathcal{V}_{i, j}^{2}\right) & =\sum_{i} \sum_{j} f\left(x_{i}, y_{j}\right) dx \wedge dy\left(V_{i, j}^{1}, V_{i, j}^{2}\right) \\
& =\sum_{i} \sum_{j} \omega_{\varphi\left(x_{i}, y_{j}\right)}\left(\frac{\mathcal{V}_{i, j}^{1}}{\left|V_{i, j}^{1}\right|}, \frac{\mathcal{V}_{i, j}^{2}}{\left|V_{i, j}^{2}\right|}\right) dx \wedge dy\left(V_{i, j}^{1}, V_{i, j}^{2}\right).
\end{aligned}
We have also shown that, when we take the limit as the distance between adjacent partition points tends toward zero, this sum converges to the sum

\sum_{i} \sum_{j} \omega_{\varphi(x, y)}\left(\frac{\partial \varphi}{\partial x}(x, y), \frac{\partial \varphi}{\partial y}(x, y)\right) dx \wedge dy\left(V_{i, j}^{1}, V_{i, j}^{2}\right).
Hence, it must be that
\begin{equation*}
\int_{M} \omega=\int_{R} \omega_{\varphi(x, y)}\left(\frac{\partial \varphi}{\partial x}(x, y), \frac{\partial \varphi}{\partial y}(x, y)\right) dx \wedge dy. \tag{5.1}
\end{equation*}
At first glance, this seems like a very complicated formula. Let's break it down by examining the integrand on the right. The most important thing to notice is that this is just a differential 2-form on R, even though \omega is a 2-form on \mathbb{R}^{3}. For each pair of numbers (x, y), the function \omega_{\varphi(x, y)}\left(\frac{\partial \varphi}{\partial x}(x, y), \frac{\partial \varphi}{\partial y}(x, y)\right) just returns some real number. Hence, the entire integrand is of the form g\, dx \wedge dy, where g: R \rightarrow \mathbb{R}.
The only way to really convince oneself of the usefulness of this formula is to actually use it.
Example 20. Let M denote the top half of the unit sphere in \mathbb{R}^{3}. Let \omega=z^{2}\, dx \wedge dy be a differential 2-form on \mathbb{R}^{3}. Calculating \int_{M} \omega directly by setting up a Riemann sum would be next to impossible, so we employ the parameterization \varphi(r, t)=\left(r \cos t, r \sin t, \sqrt{1-r^{2}}\right), where 0 \leq t \leq 2 \pi and 0 \leq r \leq 1.
\begin{aligned}
\int_{M} \omega & =\int_{R} \omega_{\varphi(r, t)}\left(\frac{\partial \varphi}{\partial r}(r, t), \frac{\partial \varphi}{\partial t}(r, t)\right) dr \wedge dt \\
& =\int_{R} \omega_{\varphi(r, t)}\left(\left\langle\cos t, \sin t, \frac{-r}{\sqrt{1-r^{2}}}\right\rangle,\langle-r \sin t, r \cos t, 0\rangle\right) dr \wedge dt \\
& =\int_{R}\left(1-r^{2}\right)\left|\begin{array}{cc}
\cos t & -r \sin t \\
\sin t & r \cos t
\end{array}\right| dr \wedge dt \\
& =\int_{R}\left(1-r^{2}\right)(r)\, dr \wedge dt \\
& =\int_{0}^{2 \pi} \int_{0}^{1} r-r^{3}\, dr\, dt=\frac{\pi}{2}.
\end{aligned}
Notice that, as promised, the term \omega_{\varphi(r, t)}\left(\frac{\partial \varphi}{\partial r}(r, t), \frac{\partial \varphi}{\partial t}(r, t)\right) in the second integral above simplified to a function from R to \mathbb{R}, namely r-r^{3}.
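Equation 5.1 is concrete enough to check by machine. The following sketch (our own illustration; the function name is invented, not part of the text) approximates the right-hand side for Example 20 with a midpoint Riemann sum and recovers \pi / 2 \approx 1.5708:

```python
import math

def integral_example_20(n=200):
    """Midpoint Riemann sum for the pulled-back 2-form of Example 20.

    The integrand is omega_phi(dphi/dr, dphi/dt) = z^2 * det[[dx/dr, dx/dt],
    [dy/dr, dy/dt]], integrated over 0 <= r <= 1, 0 <= t <= 2*pi.
    """
    dr, dt = 1.0 / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        z2 = 1 - r * r  # z^2 on the upper unit sphere
        for j in range(n):
            t = (j + 0.5) * dt
            # det of [[cos t, -r sin t], [sin t, r cos t]]; it simplifies to r
            det = math.cos(t) * (r * math.cos(t)) - (-r * math.sin(t)) * math.sin(t)
            total += z2 * det * dr * dt
    return total
```

The sum converges to \pi / 2, exactly as in the hand computation above.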
5.3. Integrate the 2 -form
omega=(1)/(x)dy^^dz-(1)/(y)dx^^dz\omega=\frac{1}{x} d y \wedge d z-\frac{1}{y} d x \wedge d z
over the top half of the unit sphere using the following parameterizations from cylindrical and spherical coordinates:
1. (r, \theta) \rightarrow\left(r \cos \theta, r \sin \theta, \sqrt{1-r^{2}}\right), where 0 \leq \theta \leq 2 \pi and 0 \leq r \leq 1.
2. (\theta, \phi) \rightarrow(\sin \phi \cos \theta, \sin \phi \sin \theta, \cos \phi), where 0 \leq \theta \leq 2 \pi and 0 \leq \phi \leq \frac{\pi}{2}.
5.4. Let omega\omega be the 2 -form from the previous problem. Integrate omega\omega over the surface parameterized by the following:
phi(r,theta)=(r cos theta,r sin theta,cos r),0 <= r <= (pi)/(2),0 <= theta <= 2pi.\phi(r, \theta)=(r \cos \theta, r \sin \theta, \cos r), 0 \leq r \leq \frac{\pi}{2}, 0 \leq \theta \leq 2 \pi .
5.5. Let SS be the surface in R^(3)\mathrm{R}^{3} parameterized by
where 0 \leq \theta \leq \pi and 0 \leq z \leq 1. Let \omega=x y z\, dy \wedge dz. Calculate \int_{S} \omega.
5.6. Let omega\omega be the differential 2 -form on R^(3)R^{3} given by
omega=xyzdx^^dy+x^(2)zdy^^dz-ydx^^dz\omega=x y z d x \wedge d y+x^{2} z d y \wedge d z-y d x \wedge d z
1. Let P be the portion of the plane 3=2 x+3 y-z in \mathbb{R}^{3} that lies above the square \{(x, y) \mid 0 \leq x \leq 1, 0 \leq y \leq 1\}. Calculate \int_{P} \omega.
2. Let M be the portion of the graph of z=x^{2}+y in \mathbb{R}^{3} that lies above the rectangle \{(x, y) \mid 0 \leq x \leq 1, 0 \leq y \leq 2\}. Calculate \int_{M} \omega.
5.7. Let D be some region in the xy-plane. Let M denote the portion of the graph of z=g(x, y) that lies above D.

1. Let \omega=f(x, y)\, dx \wedge dy be a differential 2-form on \mathbb{R}^{3}. Show that

\int_{M} \omega=\int_{D} f(x, y)\, dx\, dy.

Notice that the answer does not depend on the function g(x, y).

2. Now suppose \omega=f(x, y, z)\, dx \wedge dy. Show that

\int_{M} \omega=\int_{D} f(x, y, g(x, y))\, dx\, dy.
5.8. Let S be the surface obtained from the graph of z=f(x)=x^{3}, where 0 \leq x \leq 1, by rotating around the z-axis. Integrate the 2-form \omega=y\, dx \wedge dz over S. (Hint: use cylindrical coordinates to parameterize S.)
5.3 Orientations
What would have happened in Example 20 if we had used the parameterization \varphi^{\prime}(r, t)=\left(-r \cos t, r \sin t, \sqrt{1-r^{2}}\right) instead? We leave it to the reader to check that we end up with the answer -\pi / 2 rather than \pi / 2. This is a problem. We defined \int_{M} \omega before we started talking about parameterizations. Hence, the value which we calculate for this integral should not depend on our choice of parameterization. So what happened?
To analyze this completely, we need to go back to the definition of \int_{M} \omega from the previous section. We noted at the time that a choice was made to calculate \omega_{p_{i, j}}\left(V_{i, j}^{1}, V_{i, j}^{2}\right) instead of \omega_{p_{i, j}}\left(-V_{i, j}^{1}, V_{i, j}^{2}\right). But was this choice correct? The answer is a resounding maybe! We are actually missing enough information to tell. An orientation is precisely some piece of information about M which we can use to make the right choice. This way we can tell a friend what M is, what \omega is, and what the orientation on M is, and they are sure to get the same answer. Recall Equation 5.1:
int_(M)omega=int_(R)omega_(phi(x,y))((del phi)/(del x)(x,y),(del phi)/(del y)(x,y))dx^^dy.\int_{M} \omega=\int_{R} \omega_{\phi(x, y)}\left(\frac{\partial \phi}{\partial x}(x, y), \frac{\partial \phi}{\partial y}(x, y)\right) d x \wedge d y .
Depending on the specified orientation of MM, it may be incorrect to use Equation 5.1. Sometimes we may want to use:
int_(M)omega=int_(R)omega_(phi(x,y))(-(del phi)/(del x)(x,y),(del phi)/(del y)(x,y))dx^^dy.\int_{M} \omega=\int_{R} \omega_{\phi(x, y)}\left(-\frac{\partial \phi}{\partial x}(x, y), \frac{\partial \phi}{\partial y}(x, y)\right) d x \wedge d y .
Both omega\omega and int\int are linear. This just means the negative sign in the integrand on the right can go all the way outside. Hence, we can write both this equation and Equation 5.1 as
\begin{equation*}
\int_{M} \omega= \pm \int_{R} \omega_{\phi(x, y)}\left(\frac{\partial \phi}{\partial x}(x, y), \frac{\partial \phi}{\partial y}(x, y)\right) dx \wedge dy. \tag{5.2}
\end{equation*}
We define an orientation on MM to be any piece of information that can be used to decide, for each choice of parameterization varphi\varphi, whether to use the " + " or "-" sign in Equation 5.2, so that the integral will always yield the same answer.
We will see several ways to specify an orientation on MM. The first will be geometric. It has the advantage that it can be easily visualized, but the disadvantage that it is actually much harder to use in calculations. All we do is draw a small circle on MM with an arrowhead on it. To use this "oriented circle" to tell if we need the " + " or "-" sign in Equation 5.2, we draw the vectors (del phi)/(del x)(x,y)\frac{\partial \phi}{\partial x}(x, y) and (del phi)/(del y)(x,y)\frac{\partial \phi}{\partial y}(x, y) and an arc with an arrow from the first to the second. If the direction of this arrow agrees with the oriented circle, then we use the " + " sign. If they disagree, then we use the "-" sign. See Figure 5.5.
Fig. 5.5. An orientation on M is given by an oriented circle. (The two panels are labeled "Use the '-' sign when integrating" and "Use the '+' sign when integrating.")
A more algebraic way to specify an orientation is to simply pick a point p of M and choose any 2-form \nu on T_{p} \mathbb{R}^{3} such that \nu\left(V_{p}^{1}, V_{p}^{2}\right) \neq 0 whenever V_{p}^{1} and V_{p}^{2} are vectors tangent to M and V_{p}^{1} is not a multiple of V_{p}^{2}. Do not confuse this 2-form with the differential 2-form, \omega, of Equation 5.2. The 2-form \nu is only defined on the single tangent space T_{p} \mathbb{R}^{3}, whereas \omega is defined everywhere.
Let us now see how we can use \nu to decide whether to use the "+" or "-" sign in Equation 5.2. All we must do is calculate \nu\left(\frac{\partial \varphi}{\partial x}\left(x_{p}, y_{p}\right), \frac{\partial \varphi}{\partial y}\left(x_{p}, y_{p}\right)\right), where \varphi\left(x_{p}, y_{p}\right)=p. If the result is positive, then we will use the "+" sign to calculate the integral in Equation 5.2. If it is negative, then we use the "-" sign. Let's see how this works with an example.
Example 21. Let's revisit Example 20. The problem was to integrate the form z^{2}\, dx \wedge dy over M, the top half of the unit sphere. But no orientation was ever given for M, so the problem was not very well stated. Let's pick an easy point, p, on M: (0, \sqrt{2}/2, \sqrt{2}/2). The vectors \langle 1,0,0\rangle_{p} and \langle 0,1,-1\rangle_{p} in T_{p} \mathbb{R}^{3} are both tangent to M. To give an orientation on M, all we do is specify a 2-form \nu on T_{p} \mathbb{R}^{3} such that \nu(\langle 1,0,0\rangle,\langle 0,1,-1\rangle) \neq 0. Let's pick an easy one: \nu=dx \wedge dy.
Now, let's see what happens when we try to evaluate the integral by using the parameterization \varphi^{\prime}(r, t)=\left(-r \cos t, r \sin t, \sqrt{1-r^{2}}\right). First, note that \varphi^{\prime}(\sqrt{2}/2, \pi/2)=(0, \sqrt{2}/2, \sqrt{2}/2) and

\frac{\partial \varphi^{\prime}}{\partial r}\left(\frac{\sqrt{2}}{2}, \frac{\pi}{2}\right)=\langle 0,1,-1\rangle, \quad \frac{\partial \varphi^{\prime}}{\partial t}\left(\frac{\sqrt{2}}{2}, \frac{\pi}{2}\right)=\left\langle\frac{\sqrt{2}}{2}, 0,0\right\rangle.

Now we check the value of \nu when this pair is plugged in:
dx \wedge dy\left(\langle 0,1,-1\rangle,\left\langle\frac{\sqrt{2}}{2}, 0,0\right\rangle\right)=\left|\begin{array}{cc}
0 & \frac{\sqrt{2}}{2} \\
1 & 0
\end{array}\right|=-\frac{\sqrt{2}}{2}.
The sign of this result is "-," so we need to use the negative sign in Equation 5.2 in order to use varphi^(')\varphi^{\prime} to evaluate the integral of omega\omega over MM.
\begin{aligned}
\int_{M} \omega & =-\int_{R} \omega_{\varphi^{\prime}(r, t)}\left(\frac{\partial \varphi^{\prime}}{\partial r}(r, t), \frac{\partial \varphi^{\prime}}{\partial t}(r, t)\right) dr \wedge dt \\
& =-\int_{R}\left(1-r^{2}\right)\left|\begin{array}{rr}
-\cos t & r \sin t \\
\sin t & r \cos t
\end{array}\right| dr\, dt=\frac{\pi}{2}.
\end{aligned}
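The orientation bookkeeping of Example 21 can be mechanized. The sketch below (our own illustration; the helper names are invented) evaluates \nu=dx \wedge dy on the partial derivatives of \varphi^{\prime} at p to choose the sign in Equation 5.2, then computes the signed integral numerically:

```python
import math

def det2(u, v):
    """dx ^ dy applied to two vectors in R^3: the 2x2 determinant
    of their x- and y-components."""
    return u[0] * v[1] - u[1] * v[0]

def dphi_dr(r, t):
    # partial of phi'(r,t) = (-r cos t, r sin t, sqrt(1-r^2)) with respect to r
    return (-math.cos(t), math.sin(t), -r / math.sqrt(1 - r * r))

def dphi_dt(r, t):
    # partial of phi' with respect to t
    return (r * math.sin(t), r * math.cos(t), 0.0)

# nu = dx ^ dy on the tangent vectors at p = phi'(sqrt(2)/2, pi/2):
# negative, so Equation 5.2 gets the "-" sign.
r0, t0 = math.sqrt(2) / 2, math.pi / 2
sign = 1 if det2(dphi_dr(r0, t0), dphi_dt(r0, t0)) > 0 else -1

# Midpoint Riemann sum for the signed integral.
n = 200
total = 0.0
for i in range(n):
    r = (i + 0.5) / n
    for j in range(n):
        t = (j + 0.5) * 2 * math.pi / n
        total += (1 - r * r) * det2(dphi_dr(r, t), dphi_dt(r, t)) * (1.0 / n) * (2 * math.pi / n)
result = sign * total
```

The unsigned sum comes out near -\pi / 2, the sign test flips it, and `result` lands near \pi / 2, in agreement with the computation above.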
Very often, the surface that we are going to integrate over is given to us by a parameterization. In this case, there is a very natural choice of orientation: just use the "+" sign in Equation 5.2! We will call this the orientation of M induced by the parameterization. In other words, if you see a problem phrased like this, "Calculate the integral of the form \omega over the surface M given by parameterization \varphi with the induced orientation," then you should just go back to using Equation 5.1 and not worry about anything else. In fact, this situation is so common that when you are asked to integrate some form over a surface which is given by a parameterization, but no orientation is specified, you should assume the induced orientation is the desired one.
5.9. Let M be the image of the parameterization \varphi(a, b)=(a, a+b, a b), where 0 \leq a \leq 1 and 0 \leq b \leq 1. Integrate the form \omega=2 z\, dx \wedge dy+y\, dy \wedge dz-x\, dx \wedge dz over M using the orientation induced by \varphi.
There is one subtle technical point here that should be addressed. The novice reader may want to skip this for now. Suppose someone gives you a surface defined by a parameterization and tells you to integrate some 2 -form over it, using the induced orientation. But you are clever, and you realize that if you change parameterizations you
can make the integral easier. Which orientation do you use? The problem is that the orientation induced by your new parameterization may not be the same as the one induced by the original parameterization.
To fix this we need to see how we can define a 2-form on some tangent space T_{p} \mathbb{R}^{3}, where p is a point of M, that yields an orientation of M consistent with the one induced by a parameterization \varphi. This is not so hard. If dx \wedge dy\left(\frac{\partial \varphi}{\partial x}\left(x_{p}, y_{p}\right), \frac{\partial \varphi}{\partial y}\left(x_{p}, y_{p}\right)\right) is positive, then we simply let \nu=dx \wedge dy. If it is negative, then we let \nu=-dx \wedge dy. In the unlikely event that dx \wedge dy\left(\frac{\partial \varphi}{\partial x}\left(x_{p}, y_{p}\right), \frac{\partial \varphi}{\partial y}\left(x_{p}, y_{p}\right)\right)=0, we can remedy things by either changing the point p or by using dx \wedge dz instead of dx \wedge dy. Once we have defined \nu, we know how to integrate over M using any other parameterization.
5.10. Let \Psi be the following parameterization of the sphere of radius one:

\Psi(\theta, \phi)=(\sin \phi \cos \theta, \sin \phi \sin \theta, \cos \phi).
Which of the following 2-forms on T_(((sqrt2)/(2),0,(sqrt2)/(2)))R^(3)T_{\left(\frac{\sqrt{2}}{2}, 0, \frac{\sqrt{2}}{2}\right)} \mathbb{R}^{3} determine the same orientation on the sphere as that induced by Psi\Psi ?
1. \alpha=dx \wedge dy+2\, dy \wedge dz.
2. \beta=dx \wedge dy-2\, dy \wedge dz.
3. \gamma=dx \wedge dz.
5.4 Integrating 1-forms on \mathbb{R}^{m}
In the previous sections we saw how to integrate a 2-form over a region in \mathbb{R}^{2}, or over a subset of \mathbb{R}^{3} parameterized by a region in \mathbb{R}^{2}. These dimensions were chosen because nothing new needs to be introduced to move to the general case. In fact, if the reader were to go back and look at what we did, he/she would find that almost nothing would change if we had been talking about n-forms instead.
Before we jump to the general case, we will work one example showing how to integrate a 1-form over a parameterized curve.

Example 22. Let C be the curve in \mathbb{R}^{2} parameterized by \varphi(t)=\left(t^{2}, t^{3}\right), where 0 \leq t \leq 2. Let \nu be the 1-form y\, dx+x\, dy. We calculate \int_{C} \nu.
The first step is to calculate

\frac{d \varphi}{d t}(t)=\left\langle 2 t, 3 t^{2}\right\rangle.

So, on this vector, dx=2 t and dy=3 t^{2}. From the parameterization we also know x=t^{2} and y=t^{3}. Hence, since \nu=y\, dx+x\, dy, we have
\begin{aligned}
\int_{C} \nu & =\int_{0}^{2} \nu_{\varphi(t)}\left(\frac{d \varphi}{d t}\right) dt \\
& =\int_{0}^{2} 5 t^{4}\, dt \\
& =\left.t^{5}\right|_{0} ^{2} \\
& =32.
\end{aligned}
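As a quick sanity check of Example 22 (an illustration of ours, not the text's; the parameterization \varphi(t)=\left(t^{2}, t^{3}\right) is read off from x=t^{2}, y=t^{3} above), a midpoint Riemann sum of the pullback recovers 32:

```python
def integral_example_22(n=2000):
    """Midpoint Riemann sum for the 1-form y dx + x dy over the curve
    phi(t) = (t^2, t^3), 0 <= t <= 2.  The pullback integrand is
    y * dx/dt + x * dy/dt = t^3 * 2t + t^2 * 3t^2 = 5 t^4."""
    dt = 2.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        x, y = t * t, t ** 3
        dx_dt, dy_dt = 2 * t, 3 * t * t
        total += (y * dx_dt + x * dy_dt) * dt
    return total
```

The sum converges to t^{5} evaluated from 0 to 2, i.e., 32.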
5.11. Let C be the curve in \mathbb{R}^{3} parameterized by \varphi(t)=\left(t, t^{2}, 1+t\right), where 0 \leq t \leq 2. Integrate the 1-form \omega=y\, dx+z\, dy+x y\, dz over C using the induced orientation.
5.12. Let CC be the curve parameterized by the following:
phi(t)=(2cos t,2sin t,t^(2)),quad0 <= t <= 2.\phi(t)=\left(2 \cos t, 2 \sin t, t^{2}\right), \quad 0 \leq t \leq 2 .
Integrate the 1-form (x^(2)+y^(2))dz\left(x^{2}+y^{2}\right) d z over CC.
5.13. Let C be the subset of the graph of y=x^{2} where 0 \leq x \leq 1. An orientation on C is given by the 1-form dx on T_{(0,0)} \mathbb{R}^{2}. Let \omega be the 1-form -x^{4}\, dx+x y\, dy. Integrate \omega over C.
5.14. Let M be the line segment in \mathbb{R}^{2} which connects (0,0) to (4,6). An orientation on M is specified by the 1-form -dx on T_{(2,3)} \mathbb{R}^{2}. Integrate the form \omega=\sin y\, dx+\cos x\, dy over M.
Just as there was for surfaces, for parameterized curves there is also a pictorial way to specify an orientation. All we have to do is place an arrowhead somewhere along the curve, and ask whether or
not the derivative of the parameterization gives a tangent vector that points in the same direction. We illustrate this in the next example.
Example 23. Let C be the portion of the graph of x=y^{2} where 0 \leq x \leq 1, as pictured in Figure 5.6. Notice the arrowhead on C. We integrate the 1-form \omega=dx+dy over C with the indicated orientation.
First, parameterize C as \varphi(t)=\left(t^{2}, t\right), where 0 \leq t \leq 1. Now notice that the derivative of \varphi is

\frac{d \varphi}{d t}(t)=\langle 2 t, 1\rangle.
Fig. 5.6. An orientation on C is given by an arrowhead.
At the point (0,0) this is the vector \langle 0,1\rangle, which points in a direction opposite to that of the arrowhead. This tells us to use a negative sign when we integrate, as follows:

\int_{C} \omega=-\int_{0}^{1} \omega_{\varphi(t)}\left(\frac{d \varphi}{d t}\right) dt=-\int_{0}^{1}(2 t+1)\, dt=-2.
5.5 Integrating n-forms on \mathbb{R}^{m}
To proceed to the general case, we need to know what the integral of an n-form over a region of \mathbb{R}^{n} is. The steps to define such an object are precisely the same as before, and the results are similar. If our coordinates in \mathbb{R}^{n} are \left(x_{1}, x_{2}, \ldots, x_{n}\right), then an n-form is always given by
f(x_(1),dots,x_(n))dx_(1)^^dx_(2)^^dots^^dx_(n)f\left(x_{1}, \ldots, x_{n}\right) d x_{1} \wedge d x_{2} \wedge \ldots \wedge d x_{n}
Going through the steps, we find that the definition of \int_{R} \omega is exactly the same as the definition we learned in Chapter 1 for \int_{R} f\, dx_{1}\, dx_{2} \ldots dx_{n}.
5.15. Let Omega\Omega be the cube in R^(3)\mathrm{R}^{3}
{(x,y,z)∣0 <= x,y,z <= 1}.\{(x, y, z) \mid 0 \leq x, y, z \leq 1\} .
Let \gamma be the 3-form x^{2} z\, dx \wedge dy \wedge dz. Calculate \int_{\Omega} \gamma.
Moving on to integrals of n-forms over subsets of \mathbb{R}^{m} parameterized by a region in \mathbb{R}^{n}, we again find nothing surprising. Suppose we denote such a subset by M. Let \varphi: R \subset \mathbb{R}^{n} \rightarrow M \subset \mathbb{R}^{m}
be a parameterization. Then we find that the following generalization of Equation 5.2 must hold:

\begin{equation*}
\int_{M} \omega= \pm \int_{R} \omega_{\varphi\left(x_{1}, \ldots, x_{n}\right)}\left(\frac{\partial \varphi}{\partial x_{1}}\left(x_{1}, \ldots, x_{n}\right), \ldots, \frac{\partial \varphi}{\partial x_{n}}\left(x_{1}, \ldots, x_{n}\right)\right) dx_{1} \wedge \cdots \wedge dx_{n}. \tag{5.3}
\end{equation*}
To decide whether or not to use the negative sign in this equation we must specify an orientation. Again, one way to do this is to give an n-form \nu on T_{p} \mathbb{R}^{m}, where p is some point of M. We use the negative sign when the value of \nu\left(\frac{\partial \varphi}{\partial x_{1}}, \ldots, \frac{\partial \varphi}{\partial x_{n}}\right) is negative, where \varphi\left(x_{1}, \ldots, x_{n}\right)=p. If M was originally given by a parameterization and we are instructed to use the induced orientation, then we can ignore the negative sign.
Example 24. Suppose \varphi(a, b, c)=\left(a+b, a+c, b c, a^{2}\right), where 0 \leq a, b, c \leq 1. Let M be the image of \varphi with the induced orientation. Suppose \omega=dy \wedge dz \wedge dw-dx \wedge dz \wedge dw-2 y\, dx \wedge dy \wedge dz. Then,
\begin{aligned}
\int_{M} \omega & =\int_{R} \omega_{\varphi(a, b, c)}\left(\frac{\partial \varphi}{\partial a}(a, b, c), \frac{\partial \varphi}{\partial b}(a, b, c), \frac{\partial \varphi}{\partial c}(a, b, c)\right) da \wedge db \wedge dc \\
& =\int_{R} \omega_{\varphi(a, b, c)}(\langle 1,1,0,2 a\rangle,\langle 1,0, c, 0\rangle,\langle 0,1, b, 0\rangle) da \wedge db \wedge dc \\
& =\int_{R}\left|\begin{array}{rrr}
1 & 0 & 1 \\
0 & c & b \\
2 a & 0 & 0
\end{array}\right|-\left|\begin{array}{rrr}
1 & 1 & 0 \\
0 & c & b \\
2 a & 0 & 0
\end{array}\right|-2(a+c)\left|\begin{array}{lll}
1 & 1 & 0 \\
1 & 0 & 1 \\
0 & c & b
\end{array}\right| da \wedge db \wedge dc \\
& =\int_{0}^{1} \int_{0}^{1} \int_{0}^{1} 2 b c+2 c^{2}\, da\, db\, dc=\frac{7}{6}.
\end{aligned}
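Example 24 can be verified numerically as well. The sketch below (our own code; the helper names are invented) builds the three 3 \times 3 determinants from the partial derivative vectors and midpoint-sums over the unit cube, approaching 7/6:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def integrand(a, b, c):
    """omega applied to the partial derivatives of
    phi(a,b,c) = (a+b, a+c, bc, a^2); coordinates are (x, y, z, w)."""
    va = (1.0, 1.0, 0.0, 2 * a)  # d(phi)/da
    vb = (1.0, 0.0, c, 0.0)      # d(phi)/db
    vc = (0.0, 1.0, b, 0.0)      # d(phi)/dc

    def minor(idx):
        # matrix whose columns are va, vb, vc restricted to coordinates idx
        return [[v[i] for v in (va, vb, vc)] for i in idx]

    y = a + c
    # dy^dz^dw - dx^dz^dw - 2y dx^dy^dz
    return (det3(minor((1, 2, 3))) - det3(minor((0, 2, 3)))
            - 2 * y * det3(minor((0, 1, 2))))

n = 40
h = 1.0 / n
total = sum(integrand((i + 0.5) * h, (j + 0.5) * h, (k + 0.5) * h) * h ** 3
            for i in range(n) for j in range(n) for k in range(n))
```

The integrand simplifies to 2 b c+2 c^{2}, and the sum converges to 7/6, as in the display above.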
5.6 The change of variables formula
There is a special case of Equation 5.3 which is worth noting. Suppose \varphi is a parameterization that takes some subregion, R, of \mathbb{R}^{n} into some other subregion, M, of \mathbb{R}^{n}, and \omega is an n-form on \mathbb{R}^{n}. At each point, \omega is just a volume form, so it can be written as f\left(x_{1}, \ldots, x_{n}\right) dx_{1} \wedge \cdots \wedge dx_{n}. If we let \bar{x}=\left(x_{1}, \ldots, x_{n}\right), then Equation 5.3 can be written as:
\begin{equation*}
\int_{M} f(\bar{x})\, dx_{1} \ldots dx_{n}= \pm \int_{R} f(\varphi(\bar{x}))\left|\frac{\partial \varphi}{\partial x_{1}}(\bar{x}) \cdots \frac{\partial \varphi}{\partial x_{n}}(\bar{x})\right| dx_{1} \ldots dx_{n}. \tag{5.4}
\end{equation*}
The bars |*||\cdot| indicate that we take the determinant of the matrix whose column vectors are (del phi)/(delx_(i))( bar(x))\frac{\partial \phi}{\partial x_{i}}(\bar{x}).
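A familiar instance of Equation 5.4 is the polar-coordinates substitution. As a sketch (our own example, not from the text), the following computes the area of the unit disk by integrating the constant function 1 against the Jacobian determinant, which works out to r:

```python
import math

def disk_area(n=300):
    """Integrate f = 1 over the unit disk via phi(r, theta) =
    (r cos theta, r sin theta).  The Jacobian determinant is
    det[[cos t, -r sin t], [sin t, r cos t]] = r."""
    dr, dth = 1.0 / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            t = (j + 0.5) * dth
            jac = math.cos(t) * (r * math.cos(t)) - (-r * math.sin(t)) * math.sin(t)
            total += 1.0 * jac * dr * dth
    return total
```

The sum converges to \pi, the area of the unit disk, as the change of variables formula predicts.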
5.6.1 1-forms on \mathbb{R}^{1}
When n=1n=1 this is just the substitution rule for integration from calculus. We demonstrate this in the following example.
Example 25. Let's integrate the 1-form \omega=\sqrt{u}\, du over the interval [1,5]. This would be easy enough to do directly, but using a parameterization of this interval will be instructive. Let \varphi:[0,2] \rightarrow[1,5] be the parameterization given by \varphi(x)=x^{2}+1. Then \frac{d \varphi}{d x}=2 x. Now we compute:
\begin{aligned}
\int_{1}^{5} \sqrt{u}\, du=\int_{[1,5]} \omega & =\int_{[0,2]} \omega_{\varphi(x)}\left(\frac{d \varphi}{d x}\right) dx \\
& =\int_{[0,2]} \omega_{x^{2}+1}(\langle 2 x\rangle)\, dx \\
& =\int_{[0,2]} 2 x \sqrt{x^{2}+1}\, dx \\
& =\int_{0}^{2} 2 x \sqrt{x^{2}+1}\, dx.
\end{aligned}
Reading this backwards is doing the integral \int_{0}^{2} 2 x \sqrt{x^{2}+1}\, dx by "u-substitution."
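Both sides of the substitution in Example 25 are easy to check numerically (our own sketch; the helper name is invented). The exact value is \frac{2}{3}\left(5^{3/2}-1\right):

```python
import math

def midpoint(f, a, b, n=20000):
    """Midpoint Riemann sum of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# integral of sqrt(u) du over [1, 5]
lhs = midpoint(lambda u: math.sqrt(u), 1.0, 5.0)
# the same integral after substituting u = x^2 + 1
rhs = midpoint(lambda x: 2 * x * math.sqrt(x * x + 1), 0.0, 2.0)
exact = (2.0 / 3.0) * (5 ** 1.5 - 1)
```

Both sums agree with each other and with the exact antiderivative value.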
5.6.2 2-forms on \mathbb{R}^{2}
For other nn, Equation 5.4 is the general change of variables formula.
Example 26. We will use the parameterization \Psi(u, v)=\left(u, u^{2}+v^{2}\right) to evaluate

\iint_{R}\left(x^{2}+y\right) dA,

where R is the region of the xy-plane bounded by the parabolas y=x^{2} and y=x^{2}+4, and the lines x=0 and x=1.
The first step is to find out what the limits of integration will be when we change coordinates.
\begin{aligned}
\iint_{R}\left(x^{2}+y\right) dA & =\int_{R}\left(x^{2}+y\right) dx \wedge dy \\
& =\int_{0}^{2} \int_{0}^{1}\left(u^{2}+\left(u^{2}+v^{2}\right)\right)\left|\begin{array}{rr}
1 & 0 \\
2 u & 2 v
\end{array}\right| du\, dv \\
& =\int_{0}^{2} \int_{0}^{1} 4 u^{2} v+2 v^{3}\, du\, dv \\
& =\int_{0}^{2} \frac{4}{3} v+2 v^{3}\, dv \\
& =\frac{8}{3}+8=\frac{32}{3}.
\end{aligned}
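A numeric check of Example 26 (our own illustration, not part of the text): pull x^{2}+y back along (u, v) \mapsto\left(u, u^{2}+v^{2}\right), weight by the Jacobian determinant 2 v, and midpoint-sum over the (u, v) rectangle:

```python
def example_26(n=300):
    """Midpoint sum for the change of variables in Example 26:
    f(x, y) = x^2 + y pulled back along (u, v) -> (u, u^2 + v^2),
    whose Jacobian determinant det[[1, 0], [2u, 2v]] is 2v."""
    du, dv = 1.0 / n, 2.0 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        for j in range(n):
            v = (j + 0.5) * dv
            x, y = u, u * u + v * v
            jac = 2 * v
            total += (x * x + y) * jac * du * dv
    return total
```

The sum converges to 32/3, matching the computation above.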
Example 27. In our second example, we revisit Fubini's theorem, which says that the order of integration does not matter in a multiple integral. Recall from Section 5.2 the curious fact that \int f\, dx\, dy=\int f\, dx \wedge dy, but \int f\, dy\, dx \neq \int f\, dy \wedge dx. We are now prepared to see why this is.
Let's suppose we want to integrate the function f(x, y) over the rectangle R in \mathbb{R}^{2} with vertices at (0,0), (a, 0), (0, b) and (a, b). We know the answer is just \int_{0}^{b} \int_{0}^{a} f(x, y)\, dx\, dy. We also know this integral is equal to \int_{R} f\, dx \wedge dy, where R is given the "standard" orientation (e.g., the one specified by a counterclockwise oriented circle).
Let's see what happens if we try to compute the integral using the following parameterization:
phi(y,x)=(x,y),0 <= y <= b,0 <= x <= a.\phi(y, x)=(x, y), 0 \leq y \leq b, 0 \leq x \leq a .
Next we have to deal with the issue of orientation. The partial derivatives of this parameterization are \frac{\partial \phi}{\partial y}=\langle 0,1\rangle and \frac{\partial \phi}{\partial x}=\langle 1,0\rangle. This pair of vectors is in an order which does not agree with the orientation of R, so we have to use the negative sign when employing Equation 5.4:
\begin{aligned}
\int_{R} f(x, y)\, dx\, dy & =-\int_{R} f(\phi(y, x))\left|\frac{\partial \phi}{\partial y}\ \frac{\partial \phi}{\partial x}\right| dy\, dx \\
& =-\int_{R} f(x, y)\left|\begin{array}{ll}
0 & 1 \\
1 & 0
\end{array}\right| dy \wedge dx \\
& =-\int_{R} f(x, y)(-1)\, dy \wedge dx \\
& =\int_{R} f(x, y)\, dy\, dx.
\end{aligned}
From the above, we see that one of the reasons Fubini's theorem is true is that when the order of integration is switched there are two negative signs. So, \int_{R} f\, dy\, dx actually does equal \int_{R} f\, dy \wedge dx, but only if you remember to switch the orientation of R!
5.16. Let E be the region in \mathbb{R}^{2} parameterized by \Psi(u, v)=\left(u^{2}+v^{2}, 2 u v\right), where 0 \leq u \leq 1 and 0 \leq v \leq 1. Evaluate
int_(E)(1)/(sqrt(x-y))dA.\int_{E} \frac{1}{\sqrt{x-y}} d A .
Up until this point, we have only seen how to integrate functions f(x, y) over regions in the plane which are rectangles. Let's now see how we can use parameterizations to integrate over more general regions. Suppose first that R is the region of the plane below the graph of y=g(x), above the x-axis, and between the lines x=a and x=b.
The region R can be parameterized by \varphi(u, v)=(u, v\, g(u)), where a \leq u \leq b and 0 \leq v \leq 1. The partials of this parameterization are \frac{\partial \varphi}{\partial u}=\left\langle 1, v \frac{d g(u)}{d u}\right\rangle and \frac{\partial \varphi}{\partial v}=\langle 0, g(u)\rangle, and so

dx \wedge dy\left(\frac{\partial \varphi}{\partial u}, \frac{\partial \varphi}{\partial v}\right)=\left|\begin{array}{cc}
1 & 0 \\
v \frac{d g(u)}{d u} & g(u)
\end{array}\right|=g(u).
We conclude with the identity
\int_{R} f(x, y) d y d x=\int_{a}^{b} \int_{0}^{1} f(u, v g(u)) g(u) d v d u
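Readers who want to check this identity with a computer algebra system can do so in Python with sympy. The test case below, f(x, y) = xy with g(x) = x² on [0, 1], is our own choice, not one from the text:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

# Hypothetical test case: f(x, y) = x*y over the region below y = g(x) = x**2,
# between x = 0 and x = 1.
f = x * y
g = u**2  # g(u)

# Direct iterated integral over the region: y from 0 to g(x), x from 0 to 1.
lhs = sp.integrate(sp.integrate(f, (y, 0, x**2)), (x, 0, 1))

# Parameterized version: Psi(u, v) = (u, v*g(u)), with the factor g(u)
# coming from the determinant computed above.
rhs = sp.integrate(sp.integrate(f.subs({x: u, y: v * g}) * g, (v, 0, 1)), (u, 0, 1))

assert lhs == rhs == sp.Rational(1, 12)
print(lhs)  # 1/12
```

Both sides evaluate to the same rational number, as the identity predicts.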
5.17. Let RR be the region below the graph of y=x^(2)y=x^{2}, and between the lines x=0x=0 and x=2x=2. Calculate
\int_{R} x y^{2} d x d y
A slight variant is to integrate over a region bounded by the graphs of equations y=g_(1)(x)y=g_{1}(x) and y=g_(2)(x)y=g_{2}(x), and by the lines x=ax=a and x=bx=b, where g_(1)(x) < g_(2)(x)g_{1}(x)<g_{2}(x) for all x in[a,b]x \in[a, b]. To compute such an integral we may simply integrate over the region below g_(2)(x)g_{2}(x), then integrate over the region below g_(1)(x)g_{1}(x), and subtract.
5.18. Let RR be the region to the right of the yy-axis, to the left of the graph of x=g(y)x=g(y), above the line y=ay=a and below the line y=by=b. Find a formula for int_(R)f(x,y)dxdy\int_{\mathrm{R}} f(x, y) d x d y.
5.19. Let R be the region in the first quadrant of \mathrm{R}^{2}, below the line y=x, and bounded by x^{2}+y^{2}=4. Integrate the 2-form
\omega=\left(1+\frac{y^{2}}{x^{2}}\right) d x \wedge d y
over R.
5.20. Let RR be the region of the xyx y-plane bounded by the ellipse
9x^(2)+4y^(2)=369 x^{2}+4 y^{2}=36
Integrate the 2-form \omega=x^{2} d x \wedge d y over R. (Hint: See Problem 2.23 of Chapter 1.)
5.21. Integrate the 2-form
\omega=\frac{1}{x} d y \wedge d z-\frac{1}{y} d x \wedge d z
over the top half of the unit sphere using the following parameterization from rectangular coordinates:
(x, y) \rightarrow\left(x, y, \sqrt{1-x^{2}-y^{2}}\right),
where \sqrt{x^{2}+y^{2}} \leq 1. Compare your answer to Problem 5.3.
5.6.3 3-forms on \mathrm{R}^{3}
Example 28. Let V=\{(r, \theta, z) \mid 1 \leq r \leq 2, 0 \leq z \leq 1\}. (V is the region between the cylinders of radii one and two and between the planes z=0 and z=1.) Let's calculate
\int_{V} z\left(x^{2}+y^{2}\right) d x \wedge d y \wedge d z
The region V is best parameterized using cylindrical coordinates: \Psi(r, \theta, z)=(r \cos \theta, r \sin \theta, z), where 1 \leq r \leq 2, 0 \leq \theta \leq 2 \pi, and 0 \leq z \leq 1. We compute the partials:
\frac{\partial \Psi}{\partial r}=\langle\cos \theta, \sin \theta, 0\rangle, \quad \frac{\partial \Psi}{\partial \theta}=\langle-r \sin \theta, r \cos \theta, 0\rangle, \quad \frac{\partial \Psi}{\partial z}=\langle 0,0,1\rangle,
and so d x \wedge d y \wedge d z, evaluated on these vectors, is the determinant
\left|\begin{array}{ccc}
\cos \theta & \sin \theta & 0 \\
-r \sin \theta & r \cos \theta & 0 \\
0 & 0 & 1
\end{array}\right|=r .
Hence,
\begin{aligned}
\int_{V} z\left(x^{2}+y^{2}\right) d x \wedge d y \wedge d z & =\int_{0}^{1} \int_{0}^{2 \pi} \int_{1}^{2}\left(z r^{2}\right)(r) d r d \theta d z \\
& =\int_{0}^{1} \int_{0}^{2 \pi} \int_{1}^{2} z r^{3} d r d \theta d z \\
& =\frac{15}{4} \int_{0}^{1} \int_{0}^{2 \pi} z d \theta d z \\
& =\frac{15 \pi}{2} \int_{0}^{1} z d z \\
& =\frac{15 \pi}{4}
\end{aligned}
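The computation in Example 28 is easy to confirm with sympy; the Jacobian factor r from the cylindrical parameterization is inserted by hand:

```python
import sympy as sp

r, theta, z = sp.symbols('r theta z', positive=True)

# Under the cylindrical parameterization, x**2 + y**2 becomes r**2, and
# dx^dy^dz contributes the determinant factor r computed above.
integrand = z * r**2 * r

result = sp.integrate(integrand, (r, 1, 2), (theta, 0, 2 * sp.pi), (z, 0, 1))
assert result == sp.Rational(15, 4) * sp.pi
print(result)  # 15*pi/4
```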
5.22. Integrate the 3-form \omega=x d x \wedge d y \wedge d z over the region of \mathrm{R}^{3} in the first octant bounded by the cylinders x^{2}+y^{2}=1 and x^{2}+y^{2}=4, and the plane z=2.
5.23. Let RR be the region in the first octant of R^(3)R^{3} bounded by the spheres x^(2)+y^(2)+z^(2)=1x^{2}+y^{2}+z^{2}=1 and x^(2)+y^(2)+z^(2)=4x^{2}+y^{2}+z^{2}=4. Integrate the 3-form omega=dx^^dy^^dz\omega=d x \wedge d y \wedge d z over RR.
5.24. Let V be the volume in the first octant, inside the cylinder of radius one, and below the plane z=1. Integrate the 3-form
2 \sqrt{1+x^{2}+y^{2}} d x \wedge d y \wedge d z
over V.
5.25. Let V be the region inside the cylinder of radius one, centered around the z-axis, and between the planes z=0 and z=2. Integrate the function f(x, y, z)=z over V.
5.7 Summary: How to integrate a differential form
5.7.1 The steps
To compute the integral of a differential nn-form, omega\omega, over a region, SS, the steps are as follows:
1. Choose a parameterization, \Psi: R \rightarrow S, where R is a subset of \mathrm{R}^{n} (see Figure 5.7).
Fig. 5.7.
2. Find all nn vectors given by the partial derivatives of Psi\Psi. Geometrically, these are tangent vectors to SS which span its tangent space (see Figure 5.8).
3. Plug the tangent vectors into omega\omega at the point Psi(u_(1),u_(2),dots,u_(n))\Psi\left(u_{1}, u_{2}, \ldots, u_{n}\right).
4. Integrate the resulting function over RR.
Fig. 5.8.
5.7.2 Integrating 2-forms
The best way to see the above steps in action is to look at the integral of a 2 -form over a surface in R^(3)\mathrm{R}^{3}. In general, such a 2 -form is given by omega=f_(1)(x,y,z)dx^^dy+f_(2)(x,y,z)dy^^dz+f_(3)(x,y,z)dx^^dz\omega=f_{1}(x, y, z) d x \wedge d y+f_{2}(x, y, z) d y \wedge d z+f_{3}(x, y, z) d x \wedge d z.
To integrate omega\omega over SS we now follow the steps:
1. Choose a parameterization, \Psi: R \rightarrow S, where R is a subset of \mathrm{R}^{2}.
2. Find both vectors given by the partial derivatives of \Psi, namely \frac{\partial \Psi}{\partial u} and \frac{\partial \Psi}{\partial v}.
3. Plug the tangent vectors into \omega at the point \Psi(u, v). To do this, x, y and z will come from the coordinates of \Psi. That is, x=g_{1}(u, v), y=g_{2}(u, v) and z=g_{3}(u, v). Terms like d x \wedge d y are determinants of 2 \times 2 matrices, whose entries come from the vectors computed in the previous step. Geometrically, the value of d x \wedge d y is the area of the parallelogram spanned by the vectors \frac{\partial \Psi}{\partial u} and \frac{\partial \Psi}{\partial v} (tangent vectors to S), projected onto the xy-plane (see Figure 5.9).
The result of all this is:
\omega\left(\frac{\partial \Psi}{\partial u}, \frac{\partial \Psi}{\partial v}\right)=f_{1}(\Psi(u, v))\, d x \wedge d y\left(\frac{\partial \Psi}{\partial u}, \frac{\partial \Psi}{\partial v}\right)+f_{2}(\Psi(u, v))\, d y \wedge d z\left(\frac{\partial \Psi}{\partial u}, \frac{\partial \Psi}{\partial v}\right)+f_{3}(\Psi(u, v))\, d x \wedge d z\left(\frac{\partial \Psi}{\partial u}, \frac{\partial \Psi}{\partial v}\right) .
Fig. 5.9. Evaluating dx^^dyd x \wedge d y geometrically.
Note that simplifying this gives a function of u and v.
4. Integrate the resulting function over R. In other words, if h(u, v) is the function you ended up with in the previous step, then compute
\int_{R} h(u, v) d u d v .
If RR is not a rectangle you may have to find a parameterization of RR whose domain is a rectangle and repeat the above steps to compute this integral.
5.7.3 A sample 2-form
Let \omega=\left(x^{2}+y^{2}\right) d x \wedge d y+z d y \wedge d z. Let S denote the subset of the cylinder x^{2}+y^{2}=1 that lies between the planes z=0 and z=1.
1. Choose a parameterization, \Psi: R \rightarrow S. Here we may use
\Psi(\theta, z)=(\cos \theta, \sin \theta, z),
where R=\{(\theta, z) \mid 0 \leq \theta \leq 2 \pi, 0 \leq z \leq 1\}.
2. Find both vectors given by the partial derivatives of \Psi:
\frac{\partial \Psi}{\partial \theta}=\langle-\sin \theta, \cos \theta, 0\rangle, \quad \frac{\partial \Psi}{\partial z}=\langle 0,0,1\rangle .
3. Plug the tangent vectors into \omega at the point \Psi(\theta, z):
\left(\cos ^{2} \theta+\sin ^{2} \theta\right)\left|\begin{array}{cc}
-\sin \theta & \cos \theta \\
0 & 0
\end{array}\right|+z\left|\begin{array}{cc}
\cos \theta & 0 \\
0 & 1
\end{array}\right| .
This simplifies to the function z \cos \theta.
4. Integrate the resulting function over RR.
\int_{0}^{1} \int_{0}^{2 \pi} z \cos \theta d \theta d z
Note that the integrand comes from Step 3 and the limits of integration come from Step 1.
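As a sanity check, the whole procedure can be sketched in sympy for this sample 2-form. The parameterization \Psi(\theta, z)=(\cos \theta, \sin \theta, z) used below is an assumed choice for Step 1 (any parameterization of the cylinder would do):

```python
import sympy as sp

theta, z = sp.symbols('theta z')

# Assumed parameterization of the cylinder x**2 + y**2 = 1, 0 <= z <= 1.
Psi = sp.Matrix([sp.cos(theta), sp.sin(theta), z])
T_theta = Psi.diff(theta)   # <-sin(theta), cos(theta), 0>
T_z = Psi.diff(z)           # <0, 0, 1>

# dx^dy and dy^dz evaluated on (T_theta, T_z) are 2x2 determinants whose
# rows are the corresponding coordinates of the two tangent vectors.
dxdy = sp.Matrix([[T_theta[0], T_theta[1]], [T_z[0], T_z[1]]]).det()
dydz = sp.Matrix([[T_theta[1], T_theta[2]], [T_z[1], T_z[2]]]).det()

x, y = Psi[0], Psi[1]
integrand = sp.simplify((x**2 + y**2) * dxdy + z * dydz)  # z*cos(theta)

result = sp.integrate(integrand, (theta, 0, 2 * sp.pi), (z, 0, 1))
print(integrand, result)  # z*cos(theta) 0
```

The integral vanishes, since \cos \theta integrates to zero over a full period.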
6
Differentiation of Forms
6.1 The derivative of a differential 1-form
The goal of this section is to figure out what we mean by the derivative of a differential form. One way to think about a derivative is as a function which measures the variation of some other function. Suppose \omega is a 1-form on \mathrm{R}^{2}. What do we mean by the "variation" of \omega ? One thing we can try is to plug in a vector field V. The result is a function from \mathrm{R}^{2} to \mathrm{R}. We can then think about how this function varies near a point p of \mathrm{R}^{2}. But p can vary in a lot of ways, so we need to pick one. In Section 1.5, we learned how to take another vector, W, and use it to vary p. Hence, the derivative of \omega, which we shall denote "d \omega," is a function that acts on both V and W. In other words, it must be a 2-form!
Let's recall how to vary a function f(x, y) in the direction of a vector W at a point p. This was precisely the definition of the directional derivative:
\nabla_{W} f(p)=\nabla f(p) \cdot W .
Going back to the 1-form \omega and the vector field V, we take the directional derivative of the function \omega(V). Let's do this now for a specific example. Suppose \omega=y d x-x^{2} d y, V=\langle 1,2\rangle, W=\langle 2,3\rangle, and p=(1,1). Then \omega(V) is the function y-2 x^{2}. Now we compute:
\nabla_{W} \omega(V)=\nabla\left(y-2 x^{2}\right) \cdot\langle 2,3\rangle=\langle-4 x, 1\rangle \cdot\langle 2,3\rangle=-8 x+3 .
At the point p=(1,1) this is the number -5.
What about the variation of \omega(W), in the direction of V, at the point p ? The function \omega(W) is 2 y-3 x^{2}. We now compute:
\nabla_{V} \omega(W)=\nabla\left(2 y-3 x^{2}\right) \cdot\langle 1,2\rangle=\langle-6 x, 2\rangle \cdot\langle 1,2\rangle=-6 x+4 .
At the point p=(1,1) this is the number -2.
This is a small problem. We want d \omega, the derivative of \omega, to be a 2-form. Hence, d \omega(V, W) should equal -d \omega(W, V). How can we use the variations above to define d \omega so this is true? Simple. We just define it to be the difference of these variations:
\begin{equation*}
d \omega(V, W)=\nabla_{V} \omega(W)-\nabla_{W} \omega(V) . \tag{6.1}
\end{equation*}
Hence, in the above example, d \omega(\langle 1,2\rangle,\langle 2,3\rangle), at the point p=(1,1), is the number -2-(-5)=3.
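The numbers in this example can be verified mechanically. The sketch below represents the 1-form by its coefficient functions on (dx, dy) and computes both variations with sympy; the helper names are our own:

```python
import sympy as sp

x, y = sp.symbols('x y')

# omega = y dx - x**2 dy, represented by its coefficients on (dx, dy).
omega = (y, -x**2)
V, W = (1, 2), (2, 3)
p = {x: 1, y: 1}

def apply_form(form, vec):
    """omega(V): plug the (constant) vector field into the 1-form."""
    return form[0] * vec[0] + form[1] * vec[1]

def directional(f, vec):
    """Directional derivative of the function f in the direction vec."""
    return f.diff(x) * vec[0] + f.diff(y) * vec[1]

grad_W = directional(apply_form(omega, V), W).subs(p)  # variation of omega(V) along W
grad_V = directional(apply_form(omega, W), V).subs(p)  # variation of omega(W) along V

print(grad_W, grad_V, grad_V - grad_W)  # -5 -2 3
```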
6.1. Suppose \omega=x y^{2} d x+x^{3} z d y-\left(y+z^{9}\right) d z, V=\langle 1,2,3\rangle, and W=\langle-1,0,1\rangle.
Compute \nabla_{V} \omega(W) and \nabla_{W} \omega(V) at the point (2,3,-1).
Use your answer to the previous question to compute d \omega(V, W) at the point (2,3,-1).
There are other ways to determine what d \omega is rather than using Equation 6.1. Recall that a 2-form acts on a pair of vectors by projecting them onto each coordinate plane, calculating the area they span, multiplying by some constant, and adding. So the 2-form is completely determined by the constants that you multiply by after projecting. In order to figure out what these constants are, we are free to examine the action of the 2-form on any pair of vectors. For example, suppose we have two vectors that lie in the xy-plane and span a parallelogram with area one. If we run these through some 2-form and end up with the number five, then we know that the multiplicative constant for that 2-form, associated with the xy-plane, is 5. This, in turn, tells us that the 2-form equals 5 d x \wedge d y+\nu. To figure out what \nu is, we can examine the action of the 2-form on other pairs of vectors.
Let's try this with a general differential 2-form on \mathrm{R}^{3}. Such a form always looks like d \omega=a(x, y, z) d x \wedge d y+b(x, y, z) d y \wedge d z+c(x, y, z) d x \wedge d z. To figure out what a(x, y, z) is, for example, all we need to do is determine what d \omega does to the vectors \langle 1,0,0\rangle_{(x, y, z)} and \langle 0,1,0\rangle_{(x, y, z)}. Let's compute this using Equation 6.1, assuming \omega=f(x, y, z) d x+g(x, y, z) d y+h(x, y, z) d z. The result is:
d omega=((del g)/(del x)-(del f)/(del y))dx^^dy+((del h)/(del y)-(del g)/(del z))dy^^dz+((del h)/(del x)-(del f)/(del z))dx^^dz.d \omega=\left(\frac{\partial g}{\partial x}-\frac{\partial f}{\partial y}\right) d x \wedge d y+\left(\frac{\partial h}{\partial y}-\frac{\partial g}{\partial z}\right) d y \wedge d z+\left(\frac{\partial h}{\partial x}-\frac{\partial f}{\partial z}\right) d x \wedge d z .
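This coefficient formula can be checked against Equation 6.1 symbolically. The coefficient functions f, g, h below are arbitrary test choices of our own; any smooth functions would serve:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Hypothetical coefficient functions for omega = f dx + g dy + h dz.
f = x * y**2
g = sp.sin(x * z)
h = x + y * z

coords = (x, y, z)
omega = (f, g, h)
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # constant coordinate vector fields

def apply_form(form, vec):
    return sum(c * v for c, v in zip(form, vec))

def directional(func, vec):
    return sum(func.diff(c) * v for c, v in zip(coords, vec))

def d_omega(V, W):
    # Equation 6.1: d(omega)(V, W) = grad_V omega(W) - grad_W omega(V)
    return directional(apply_form(omega, W), V) - directional(apply_form(omega, V), W)

# Coefficients of d(omega) on dx^dy, dy^dz and dx^dz:
assert sp.simplify(d_omega(e[0], e[1]) - (g.diff(x) - f.diff(y))) == 0
assert sp.simplify(d_omega(e[1], e[2]) - (h.diff(y) - g.diff(z))) == 0
assert sp.simplify(d_omega(e[0], e[2]) - (h.diff(x) - f.diff(z))) == 0
print("Equation 6.1 matches the coefficient formula")
```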
6.2. Suppose \omega=f(x, y) d x+g(x, y) d y is a 1-form on \mathrm{R}^{2}. Show that d \omega=\left(\frac{\partial g}{\partial x}-\frac{\partial f}{\partial y}\right) d x \wedge d y.
6.3. If \omega=y d x-x^{2} d y, find d \omega. Verify that d \omega(\langle 1,2\rangle,\langle 2,3\rangle)=3 at the point (1,1).
Technical Note: Equation 6.1 defines the value of d \omega as long as the vector fields V and W are constant. If non-constant vector fields are used, then the answer provided by Equation 6.1 will involve partial derivatives of the components of V and W, and hence will not be a differential form. Despite this, Equation 6.1 does lead to the correct formulas for d \omega, as in Exercise 6.2 above. Once such formulas are obtained, any vector fields can be plugged in.
6.2 Derivatives of n\boldsymbol{n}-forms
Before jumping to the general case, let's look at the derivative of a 2-form. A 2-form, \omega, acts on a pair of vector fields, V and W, to return a function. To find a variation of \omega we can examine how this function varies in the direction of a third vector, U, at some point p. Hence, whatever d \omega turns out to be, it will be a function of the vectors U, V, and W at each point p. So, we would like to define it to be a 3-form.
Let's start by looking at the variation of \omega(V, W) in the direction of U. We write this as \nabla_{U} \omega(V, W). If we were to define this as the value of d \omega(U, V, W), we would find that, in general, it would not be alternating. That is, usually \nabla_{U} \omega(V, W) \neq-\nabla_{V} \omega(U, W). To remedy this, we simply define d \omega to be the alternating sum of all the variations:
d \omega(U, V, W)=\nabla_{U} \omega(V, W)-\nabla_{V} \omega(U, W)+\nabla_{W} \omega(U, V) .
We leave it to the reader to check that d \omega is alternating and multilinear (assuming U, V, and W are constant vector fields).
It should not be hard for the reader to now jump to the general case. Suppose \omega is an n-form and V^{1}, \ldots, V^{n+1} are n+1 vector fields. Then we define
d \omega\left(V^{1}, \ldots, V^{n+1}\right)=\sum_{i=1}^{n+1}(-1)^{i+1} \nabla_{V^{i}} \omega\left(V^{1}, \ldots, V^{i-1}, V^{i+1}, \ldots, V^{n+1}\right) .
In other words, d \omega, applied to n+1 vectors, is the alternating sum of the variations of \omega applied to n of those vectors in the direction of the remaining one. Note that we can think of "d" as an operator which takes n-forms to (n+1)-forms.
6.4. Show that d \omega is alternating.
6.5. Show that d(\omega+\nu)=d \omega+d \nu and d(c \omega)=c\, d \omega, for any constant c.
6.6. Suppose \omega=f(x, y, z) d x \wedge d y+g(x, y, z) d y \wedge d z+h(x, y, z) d x \wedge d z. Find d \omega. (Hint: Compute d \omega(\langle 1,0,0\rangle,\langle 0,1,0\rangle,\langle 0,0,1\rangle).) Compute d\left(x^{2} y d x \wedge d y+y^{2} z d y \wedge d z\right).
6.3 Interlude: 0-forms
Let's go back to Section 4.1, when we introduced coordinates for vectors. At that time, we noted that if CC was the graph of the function y=f(x)y=f(x) and pp was a point of CC, then the tangent line to CC at pp lies in T_(p)R^(2)T_{p} \mathrm{R}^{2} and has equation dy=mdxd y=m d x, for some constant, mm. Of course, if p=(x_(0),y_(0))p=\left(x_{0}, y_{0}\right), then mm is just the derivative of ff evaluated at x_(0)x_{0}.
Now, suppose we had looked at the graph of a function of two variables, z=f(x, y), instead. At some point, p=\left(x_{0}, y_{0}, z_{0}\right), on the graph we could look at the tangent plane, which lies in T_{p} \mathrm{R}^{3}. Its equation is d z=m_{1} d x+m_{2} d y. Since z=f(x, y), m_{1}=\frac{\partial f}{\partial x}\left(x_{0}, y_{0}\right) and m_{2}=\frac{\partial f}{\partial y}\left(x_{0}, y_{0}\right), we can rewrite this as
d f=\frac{\partial f}{\partial x} d x+\frac{\partial f}{\partial y} d y .
Notice that the right-hand side of this equation is a differential 1-form. This is a bit strange; we applied the "d" operator to something and the result was a 1-form. However, we know that when we apply the "d" operator to a differential n-form we get a differential (n+1)-form. So, it must be that f(x, y) is a differential 0-form on \mathrm{R}^{2}!
In retrospect, this should not be so surprising. After all, the input to a differential nn-form on R^(m)\mathrm{R}^{m} is a point, and nn vectors based at that point. So, the input to a differential 0 -form should be a point of R^(m)\mathrm{R}^{m},
and no vectors. In other words, a 0 -form on R^(m)\mathrm{R}^{m} is just another word for a real-valued function on R^(m)\mathrm{R}^{m}.
Let's extend some of the things we can do with forms to 0-forms. Suppose f is a 0-form, and \omega is an n-form (where n may also be 0). What do we mean by f \wedge \omega ? Since the wedge product of an n-form and an m-form is an (n+m)-form, it must be that f \wedge \omega is an n-form. It is hard to think of any way to define this other than as just the product, f \omega.
What about integration? Remember that we integrate nn-forms over subsets of R^(m)\mathrm{R}^{m} that can be parameterized by a subset of R^(n)\mathrm{R}^{n}. So 0 -forms get integrated over things parameterized by R^(0)\mathrm{R}^{0}. In other words, we integrate a 0 -form over a point. How do we do this? We do the simplest possible thing; define the value of a 0 -form, ff, integrated over the point, pp, to be +-f(p)\pm f(p). To specify an orientation we just need to say whether or not to use the - sign. We do this just by writing "- pp " instead of " pp " when we want the integral of ff over pp to be -f(p)-f(p).
One word of caution here...beware of orientations! If p inR^(n)p \in \mathrm{R}^{n}, then we use the notation " -p-p " to denote pp with the negative orientation. So if p=-3inR^(1)p=-3 \in \mathrm{R}^{1}, then -p-p is not the same as the point, 3. -p-p is just the point, -3 , with a negative orientation. So, if f(x)=x^(2)f(x)=x^{2}, then int_(-p)f\int_{-p} f=-f(p)=-9=-f(p)=-9.
6.7. If f is the 0-form x^{2} y^{3}, p is the point (-1,1), q is the point (1,-1), and r is the point (-1,-1), then compute the integral of f over the points -p, -q, and -r, with the indicated orientations.
Let's go back to our exploration of derivatives of nn-forms. Suppose f(x,y)dxf(x, y) d x is a 1 -form on R^(2)\mathrm{R}^{2}. Then we have already shown that d(fdx)=(del f)/(del y)dy^^dxd(f d x)=\frac{\partial f}{\partial y} d y \wedge d x. We now compute:
\begin{aligned}
d f \wedge d x & =\left(\frac{\partial f}{\partial x} d x+\frac{\partial f}{\partial y} d y\right) \wedge d x \\
& =\frac{\partial f}{\partial x} d x \wedge d x+\frac{\partial f}{\partial y} d y \wedge d x \\
& =\frac{\partial f}{\partial y} d y \wedge d x \\
& =d(f d x) .
\end{aligned}
6.8. If f is a 0-form, show that d\left(f d x_{1} \wedge d x_{2} \wedge \ldots \wedge d x_{n}\right)=d f \wedge d x_{1} \wedge d x_{2} \wedge \ldots \wedge d x_{n}.
6.9. Prove: d(d \omega)=0.
6.10. If \omega is an n-form, and \mu is an m-form, then show that d(\omega \wedge \mu)=d \omega \wedge \mu+(-1)^{n} \omega \wedge d \mu.
6.4 Algebraic computation of derivatives
As in Section 4.7 we break with the spirit of the text to list the identities we have acquired, and work a few examples.
Let omega\omega be an nn-form, mu\mu an mm-form, and ff a 0 -form. Then we have the following identities:
\begin{aligned}
d(d \omega) & =0 \\
d(\omega+\mu) & =d \omega+d \mu \\
d(\omega \wedge \mu) & =d \omega \wedge \mu+(-1)^{n} \omega \wedge d \mu \\
d\left(f d x_{1} \wedge d x_{2} \wedge \ldots \wedge d x_{n}\right) & =d f \wedge d x_{1} \wedge d x_{2} \wedge \ldots \wedge d x_{n} \\
d f & =\frac{\partial f}{\partial x_{1}} d x_{1}+\frac{\partial f}{\partial x_{2}} d x_{2}+\ldots+\frac{\partial f}{\partial x_{n}} d x_{n} .
\end{aligned}
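A couple of these identities can be spot-checked in sympy by representing forms on \mathrm{R}^{3} by their coefficient tuples. The helpers d0 and d1 below are our own minimal encodings of the derivative formulas already derived in this chapter:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def d0(f):
    """d of a 0-form: the tuple of coefficients of df on (dx, dy, dz)."""
    return tuple(f.diff(c) for c in coords)

def d1(omega):
    """d of a 1-form f dx + g dy + h dz: coefficients on (dx^dy, dy^dz, dx^dz)."""
    f, g, h = omega
    return (g.diff(x) - f.diff(y), h.diff(y) - g.diff(z), h.diff(x) - f.diff(z))

# Hypothetical test functions; any smooth choices work.
f = sp.exp(x * y) * sp.cos(z)
g = x**2 * y + z

# d(df) = 0: equality of mixed partials kills every coefficient.
assert all(sp.simplify(c) == 0 for c in d1(d0(f)))

# d(omega + mu) = d(omega) + d(mu), here for 0-forms.
lhs_lin = d0(f + g)
rhs_lin = tuple(a + b for a, b in zip(d0(f), d0(g)))
assert all(sp.simplify(u - v) == 0 for u, v in zip(lhs_lin, rhs_lin))
print("d(df) = 0 and additivity check out")
```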
Example 29.
\begin{aligned}
d(x y d x- & \left.x y d y+x y^{2} z^{3} d z\right) \\
= & d(x y) \wedge d x-d(x y) \wedge d y+d\left(x y^{2} z^{3}\right) \wedge d z \\
= & (y d x+x d y) \wedge d x-(y d x+x d y) \wedge d y \\
& +\left(y^{2} z^{3} d x+2 x y z^{3} d y+3 x y^{2} z^{2} d z\right) \wedge d z \\
= & y d x \wedge d x+x d y \wedge d x-y d x \wedge d y-x d y \wedge d y \\
& +y^{2} z^{3} d x \wedge d z+2 x y z^{3} d y \wedge d z+3 x y^{2} z^{2} d z \wedge d z \\
= & x d y \wedge d x-y d x \wedge d y+y^{2} z^{3} d x \wedge d z+2 x y z^{3} d y \wedge d z \\
= & -x d x \wedge d y-y d x \wedge d y+y^{2} z^{3} d x \wedge d z+2 x y z^{3} d y \wedge d z \\
= & (-x-y) d x \wedge d y+y^{2} z^{3} d x \wedge d z+2 x y z^{3} d y \wedge d z
\end{aligned}
Example 30.
\begin{aligned}
d\left(x^{2}(y\right. & \left.\left.+z^{2}\right) d x \wedge d y+z\left(x^{3}+y\right) d y \wedge d z\right) \\
& =d\left(x^{2}\left(y+z^{2}\right)\right) \wedge d x \wedge d y+d\left(z\left(x^{3}+y\right)\right) \wedge d y \wedge d z \\
& =2 x^{2} z d z \wedge d x \wedge d y+3 x^{2} z d x \wedge d y \wedge d z \\
& =5 x^{2} z d x \wedge d y \wedge d z
\end{aligned}
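Example 30 can be verified the same way. The helper below encodes d of a 2-form a\, d x \wedge d y+b\, d y \wedge d z+c\, d x \wedge d z on \mathrm{R}^{3}, whose d x \wedge d y \wedge d z coefficient is \frac{\partial a}{\partial z}+\frac{\partial b}{\partial x}-\frac{\partial c}{\partial y} (this follows from the identities above, after reordering the wedge products):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def d2(a, b, c):
    """Coefficient of dx^dy^dz in d of the 2-form a dx^dy + b dy^dz + c dx^dz."""
    return a.diff(z) + b.diff(x) - c.diff(y)

# Example 30: omega = x**2*(y + z**2) dx^dy + z*(x**3 + y) dy^dz.
result = d2(x**2 * (y + z**2), z * (x**3 + y), sp.S.Zero)
assert sp.simplify(result - 5 * x**2 * z) == 0
print(result)  # 5*x**2*z
```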
6.11. For each differential n-form, \omega, find d \omega.
sin ydx+cos xdy\sin y d x+\cos x d y.
xy^(2)dx+x^(3)zdy-(y+z^(9))dzx y^{2} d x+x^{3} z d y-\left(y+z^{9}\right) d z.
xy^(2)dy^^dz+x^(3)zdx^^dz-(y+z^(9))dx^^dyx y^{2} d y \wedge d z+x^{3} z d x \wedge d z-\left(y+z^{9}\right) d x \wedge d y.
x^(2)y^(3)z^(4)dx^^dy^^dzx^{2} y^{3} z^{4} d x \wedge d y \wedge d z.
6.12. If f is the 0-form x^{2} y^{3} and \omega is the 1-form x^{2} z d x+y^{3} z^{2} d y (on \mathrm{R}^{3}), then use the identity d(f \omega)=d f \wedge \omega+f\, d \omega to compute d(f \omega).
6.13. Let f, g, and h be functions from \mathrm{R}^{3} to \mathrm{R}. If
\omega=f d y \wedge d z-g d x \wedge d z+h d x \wedge d y,
find d \omega.
6.5 Antiderivatives
Just as in single-variable calculus it will be helpful to have some proficiency in recognizing antiderivatives. Nothing substitutes for practice...
6.14. Find forms whose derivatives are
dx^^dyd x \wedge d y.
dx^^dy^^dzd x \wedge d y \wedge d z.
yzdx+xzdy+xydzy z d x+x z d y+x y d z.
y^(2)z^(2)dx+2xyz^(2)dy+2xy^(2)zdzy^{2} z^{2} d x+2 x y z^{2} d y+2 x y^{2} z d z.
(y^(2)-2xy)cos(xy^(2))dx^^dy\left(y^{2}-2 x y\right) \cos \left(x y^{2}\right) d x \wedge d y.
6.15. Show that \omega=x y^{2} d x is not the derivative of any 0-form. (Hint: Consider d \omega.)
7
Stokes' Theorem
7.1 Cells and chains
Up until now, we have not been very specific as to the types of subsets of R^(m)\mathrm{R}^{m} on which one integrates a differential nn-form. All we have needed is a subset that can be parameterized by a region in R^(n)R^{n}. To go further we need to specify the types of regions.
Definition 1. Let I=[0,1]. An n-cell, \sigma, is the image of a differentiable map, \varphi: I^{n} \rightarrow \mathrm{R}^{m}, with a specified orientation. We denote the same cell with opposite orientation as -\sigma. We define a 0-cell to be an oriented point of \mathrm{R}^{m}.
Example 31. Suppose g_(1)(x)g_{1}(x) and g_(2)(x)g_{2}(x) are functions such that g_(1)(x) <g_{1}(x)<g_(2)(x)g_{2}(x) for all x in[a,b]x \in[a, b]. Let RR denote the subset of R^(2)\mathrm{R}^{2} bounded by the graphs of the equations y=g_(1)(x)y=g_{1}(x) and y=g_(2)(x)y=g_{2}(x), and by the lines xx=a=a and x=bx=b. In Example 13, we showed that RR is a 2-cell (assuming the induced orientation).
We would like to treat cells as algebraic objects which can be added and subtracted. But if sigma\sigma is a cell, it may not at all be clear what " 2sigma2 \sigma " represents. One way to think about it is as two copies of sigma\sigma, placed right on top of each other.
Definition 2. An n-chain is a formal linear combination of n-cells.
As one would expect, we assume the following relations hold:
You may be able to guess what the integral of an n-form, \omega, over an n-chain is. Suppose C=\sum n_{i} \sigma_{i}. Then we define
\int_{C} \omega=\sum n_{i} \int_{\sigma_{i}} \omega .
7.1. If f is the 0-form x^{2} y^{3}, p is the point (-1,1), q is the point (1,-1), and r is the point (-1,-1), then compute the integral of f over the following 0-chains:
p-q;r-pp-q ; r-p.
p+q-rp+q-r.
Another concept that will be useful for us is the boundary of an n-chain. As a warm-up, we define the boundary of a 1-cell. Suppose \sigma is the 1-cell which is the image of \varphi:[0,1] \rightarrow \mathrm{R}^{m} with the induced orientation. Then we define the boundary of \sigma (which we shall denote "\partial \sigma") as the 0-chain, \varphi(1)-\varphi(0). We can represent this pictorially as in Figure 7.1.
Fig. 7.1. Orienting the boundary of a 1 -cell.
Fig. 7.2. The boundary of a 2-cell.
Figure 7.2 depicts a 2 -cell and its boundary. Notice that the boundary consists of four individually oriented 1 -cells. This hints at the general formula. In general, if the nn-cell sigma\sigma is the image of the parameterization varphi:I^(n)rarrR^(m)\varphi: I^{n} \rightarrow \mathrm{R}^{m} with the induced orientation then
The four terms on the right side of this equality are the four 1cells depicted in Figure 7.2. The signs in front of these terms guarantee that the orientations are as pictured.
An example will hopefully clear up the confusion this all was sure to generate:
Fig. 7.3. Orienting the boundary of a 2-cell.
Example 32. Suppose \varphi(r, \theta)=(r \cos 2 \pi \theta, r \sin 2 \pi \theta). The image of \varphi is the 2-cell, \sigma, depicted in Figure 7.3. By the above definition,
This is the 1-chain depicted in Figure 7.3.
Finally, we are ready to define what we mean by the boundary of an n-chain. If C=\sum n_{i} \sigma_{i}, then we define \partial C=\sum n_{i} \partial \sigma_{i}.
Example 33. Suppose
\begin{aligned}
\varphi_{1}(r, \theta) & =\left(r \cos 2 \pi \theta, r \sin 2 \pi \theta, \sqrt{1-r^{2}}\right), \\
\varphi_{2}(r, \theta) & =\left(-r \cos 2 \pi \theta, r \sin 2 \pi \theta,-\sqrt{1-r^{2}}\right),
\end{aligned}
\sigma_{1}=\operatorname{Im}\left(\varphi_{1}\right) and \sigma_{2}=\operatorname{Im}\left(\varphi_{2}\right). Then \sigma_{1}+\sigma_{2} is a sphere in \mathrm{R}^{3}. One can check that \partial\left(\sigma_{1}+\sigma_{2}\right)=0.
7.2. If \sigma is an n-cell, show that \partial \partial \sigma=\emptyset. (At least show this if \sigma is a 2-cell or a 3-cell. The 2-cell case can be deduced pictorially from Figures 7.1 and 7.2.)
7.3. If sigma\sigma is given by the parameterization
\phi(r, \theta)=(r \cos \theta, r \sin \theta)
for 0 <= r <= 10 \leq r \leq 1 and 0 <= theta <= 40 \leq \theta \leq 4, then what is del sigma\partial \sigma ?
7.4. If sigma\sigma is given by the parameterization
\phi(r, \theta)=(r \cos \theta, r \sin \theta, r)
for 0 <= r <= 10 \leq r \leq 1 and 0 <= theta <= 2pi0 \leq \theta \leq 2 \pi, then what is del sigma\partial \sigma ?
7.2 The generalized Stokes' Theorem
In calculus, we learn that when you take a function, differentiate it, and then integrate the result, something special happens. In this section, we explore what happens when we take a form, differentiate it, and then integrate the resulting form over some chain. The general argument is quite complicated, so we start by looking at forms of a particular type integrated over very special regions.
Suppose \omega=a\, d x_{2} \wedge d x_{3} is a 2-form on \mathrm{R}^{3}, where a: \mathrm{R}^{3} \rightarrow \mathrm{R}. Let R be the unit cube, I^{3} \subset \mathrm{R}^{3}. We would like to explore what happens when we integrate d \omega over R. Note first that Problem 6.8 implies that d \omega=\frac{\partial a}{\partial x_{1}} d x_{1} \wedge d x_{2} \wedge d x_{3}.
Recall the steps used to define \int_{R} d \omega:
1. Choose a lattice of points in R, \left\{p_{i, j, k}\right\}. Since R is a cube, we can choose this lattice to be rectangular.
2. At each lattice point, p_{i, j, k}, define the three vectors V_{i, j, k}^{1}, V_{i, j, k}^{2}, and V_{i, j, k}^{3} that point to the adjacent lattice points in the three coordinate directions.
3. Evaluate d \omega_{p_{i, j, k}}\left(V_{i, j, k}^{1}, V_{i, j, k}^{2}, V_{i, j, k}^{3}\right).
4. Sum over all i, j, and k.
5. Take the limit as the maximal distance between adjacent lattice points goes to zero.
Let's focus on Step 3 for a moment. Let t be the distance between p_{i+1, j, k} and p_{i, j, k}, and assume t is small. Then \frac{\partial a}{\partial x_{1}}\left(p_{i, j, k}\right) is approximately equal to \frac{a\left(p_{i+1, j, k}\right)-a\left(p_{i, j, k}\right)}{t}. This approximation gets better and better as we let t \rightarrow 0 in Step 5.
The vectors V_{i, j, k}^{1} through V_{i, j, k}^{3} form a little cube. If we say the vector V_{i, j, k}^{1} is "vertical," and the other two are horizontal, then the "height" of this cube is t, and the area of its base is d x_{2} \wedge d x_{3}\left(V_{i, j, k}^{2}, V_{i, j, k}^{3}\right), which makes its volume t\, d x_{2} \wedge d x_{3}\left(V_{i, j, k}^{2}, V_{i, j, k}^{3}\right). Putting all this together, we find that
\begin{aligned}
d \omega_{p_{i, j, k}}\left(V_{i, j, k}^{1}, V_{i, j, k}^{2}, V_{i, j, k}^{3}\right) & =\frac{\partial a}{\partial x_{1}} d x_{1} \wedge d x_{2} \wedge d x_{3}\left(V_{i, j, k}^{1}, V_{i, j, k}^{2}, V_{i, j, k}^{3}\right) \\
& \approx \frac{a\left(p_{i+1, j, k}\right)-a\left(p_{i, j, k}\right)}{t} t d x_{2} \wedge d x_{3}\left(V_{i, j, k}^{2}, V_{i, j, k}^{3}\right) \\
& =\omega\left(V_{i+1, j, k}^{2}, V_{i+1, j, k}^{3}\right)-\omega\left(V_{i, j, k}^{2}, V_{i, j, k}^{3}\right)
\end{aligned}
Let's move on to Step 4. Here we sum over all $i$, $j$ and $k$. Suppose for the moment that $i$ ranges between 1 and $N$. First, we fix $j$ and $k$, and sum over all $i$. The result is $\omega(V^2_{N,j,k}, V^3_{N,j,k}) - \omega(V^2_{1,j,k}, V^3_{1,j,k})$. Now notice that $\sum_{j,k} \omega(V^2_{N,j,k}, V^3_{N,j,k})$ is a Riemann sum for the integral of $\omega$ over the "top" of $R$, and $\sum_{j,k} \omega(V^2_{1,j,k}, V^3_{1,j,k})$ is a Riemann sum for $\omega$ over the "bottom" of $R$. Lastly, note that $\omega$, evaluated on any pair of vectors which lie in the sides of the cube, gives zero. Hence, the integral of $\omega$ over a side of $R$ is zero. Putting all this together, we conclude:

\int_R d\omega = \int_{\partial R} \omega. \qquad (7.1)
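As a sanity check (not from the text), the conclusion above can be verified symbolically for $\omega = a\, dx_2 \wedge dx_3$ with a sample coefficient function $a$ of our own choosing: the integral of $d\omega$ over the unit cube should match the integral of $\omega$ over the top face minus the bottom face, the sides contributing nothing.

```python
# Symbolic check of Equation 7.1 on the unit cube for omega = a dx2 ^ dx3,
# with the sample coefficient a = x1*x2*x3**2 (chosen arbitrarily for illustration).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
a = x1 * x2 * x3**2

# Left side: integral of d(omega) = (da/dx1) dx1 ^ dx2 ^ dx3 over the cube.
lhs = sp.integrate(sp.diff(a, x1), (x1, 0, 1), (x2, 0, 1), (x3, 0, 1))

# Right side: omega only "sees" the faces x1 = 1 (top) and x1 = 0 (bottom);
# on the four sides every pair of tangent vectors is annihilated by dx2 ^ dx3.
rhs = sp.integrate(a.subs(x1, 1) - a.subs(x1, 0), (x2, 0, 1), (x3, 0, 1))

print(lhs, rhs)  # both equal 1/6
```

The same check works for any smooth choice of $a$, since only the Fundamental Theorem of Calculus in the $x_1$ direction is being exercised.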
7.5. Prove that Equation 7.1 holds if $\omega = b\, dx_1 \wedge dx_3$, or if $\omega = c\, dx_1 \wedge dx_2$. Caution! Beware of signs and orientations.

7.6. Use the previous problem to conclude that if $\omega = a\, dx_2 \wedge dx_3 + b\, dx_1 \wedge dx_3 + c\, dx_1 \wedge dx_2$ is an arbitrary 2-form on $\mathbb{R}^3$, then Equation 7.1 holds.

7.7. If $\omega$ is an arbitrary $(n-1)$-form on $\mathbb{R}^n$ and $R$ is the unit cube in $\mathbb{R}^n$, then show that Equation 7.1 still holds.
In general, if $C = \sum n_i \sigma_i$ is an $n$-chain, then

\int_{\partial C} \omega = \int_C d\omega.
This equation is called the generalized Stokes' Theorem. It is unquestionably the most crucial result of this text. In fact, everything we have done up to this point has been geared toward developing this equation and everything that follows will be applications of this equation. Technically, we have only established this theorem when integrating over cubes and their boundaries. We postpone the general proof to Section 9.1.
Example 34. Let $\omega = x\,dy$ be a 1-form on $\mathbb{R}^2$. Let $\sigma$ be the 2-cell which is the image of the parameterization $\varphi(r, \theta) = (r\cos\theta, r\sin\theta)$, where $0 \leq r \leq R$ and $0 \leq \theta \leq 2\pi$. By the generalized Stokes' Theorem,

\int_{\partial \sigma} \omega = \int_\sigma d\omega = \int_\sigma dx \wedge dy = \int_\sigma dx\,dy = \operatorname{Area}(\sigma) = \pi R^2.
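As a quick check (not part of the text), the left-hand side of Example 34 can also be computed directly: on the boundary circle $x = R\cos t$, $y = R\sin t$, the 1-form $x\,dy$ pulls back to $R^2\cos^2 t\,dt$.

```python
# Sketch: checking Example 34 directly on the boundary circle.
import sympy as sp

t, R = sp.symbols('t R', positive=True)
x = R * sp.cos(t)
y = R * sp.sin(t)

# Pull x dy back along the parameterization and integrate over one full turn.
integral = sp.integrate(x * sp.diff(y, t), (t, 0, 2 * sp.pi))
print(integral)  # pi*R**2, the area of the disk
```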
7.8. Verify directly that $\int_{\partial \sigma} \omega = \pi R^2$.
Example 35. Let $\omega = x\,dy + y\,dx$ be a 1-form on $\mathbb{R}^2$, and let $\sigma$ be any 2-cell. Then $\int_{\partial \sigma} \omega = \int_\sigma d\omega = 0$.

7.9. Find a 1-chain in $\mathbb{R}^2$ which bounds a 2-cell, and integrate the form $x\,dy + y\,dx$ over this curve.
7.10. Let $\omega$ be a differential $(n-1)$-form and $\sigma$ an $(n+1)$-cell. Use the generalized Stokes' Theorem in two different ways to show that $\int_{\partial \sigma} d\omega = 0$.
Example 36. Let $C$ be the curve in $\mathbb{R}^2$ parameterized by $\varphi(t) = (t^2, t^3)$, where $-1 \leq t \leq 1$. Let $f$ be the 0-form $x^2 y$. We use the generalized Stokes' Theorem to calculate $\int_C df$.
The curve $C$ goes from the point $(1,-1)$, when $t = -1$, to the point $(1,1)$, when $t = 1$. Hence, $\partial C$ is the 0-chain $(1,1) - (1,-1)$. Now we use Stokes:

\int_C df = \int_{\partial C} f = \int_{(1,1)-(1,-1)} x^2 y = 1 - (-1) = 2.
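As a sanity check (not in the text), Example 36 can be verified by pulling $f = x^2 y$ back along $\varphi(t) = (t^2, t^3)$, which gives $f(\varphi(t)) = t^7$, and integrating its derivative.

```python
# Sketch: Example 36 checked by pulling f = x**2 * y back along phi(t) = (t**2, t**3).
import sympy as sp

t = sp.symbols('t')
f_pullback = (t**2)**2 * t**3          # f(phi(t)) = t**7

# Integrating d/dt of the pullback from t = -1 to t = 1.
integral = sp.integrate(sp.diff(f_pullback, t), (t, -1, 1))
print(integral)  # 2, matching f(1,1) - f(1,-1)
```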
7.11. Calculate $\int_C df$ directly.

7.12. Let $C$ be any curve in $\mathbb{R}^3$ from $(0,0,0)$ to $(1,1,1)$. Let $\omega = y^2 z^2\,dx + 2xyz^2\,dy + 2xy^2 z\,dz$. Calculate $\int_C \omega$.
Example 37. Let $\omega = (x^2 + y)\,dx + (x - y^2)\,dy$ be a 1-form on $\mathbb{R}^2$. We wish to integrate $\omega$ over $\sigma$, the top half of the unit circle, oriented clockwise. First, note that $d\omega = 0$, so if we integrate $\omega$ over the boundary of any 2-cell, we get zero. Let $T$ denote the line segment connecting $(-1,0)$ to $(1,0)$. Then the 1-chain $\sigma - T$ bounds a 2-cell. So $\int_{\sigma - T} \omega = 0$, which implies that $\int_\sigma \omega = \int_T \omega$. This latter integral is a bit easier to compute. Let $\varphi(t) = (t, 0)$ be a parameterization of $T$, where $-1 \leq t \leq 1$. Then

\int_\sigma \omega = \int_T \omega = \int_{[-1,1]} \omega_{(t,0)}(\langle 1, 0 \rangle)\,dt = \int_{-1}^1 t^2\,dt = \frac{2}{3}.
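The claim $d\omega = 0$ that drives Example 37 reduces to an equality of mixed partials: writing $\omega = P\,dx + Q\,dy$, the $dx \wedge dy$ coefficient of $d\omega$ is $\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}$. A quick symbolic check (not in the text):

```python
# Sketch: confirming d(omega) = 0 for omega = (x**2 + y) dx + (x - y**2) dy,
# by computing the dx ^ dy coefficient dQ/dx - dP/dy of d(omega).
import sympy as sp

x, y = sp.symbols('x y')
P = x**2 + y
Q = x - y**2

coeff = sp.diff(Q, x) - sp.diff(P, y)
print(coeff)  # 0, so omega is closed
```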
7.13. Let $\omega = -y^2\,dx + x^2\,dy$. Let $\sigma$ be the 2-cell in $\mathbb{R}^2$ parameterized by the following:

\phi(u, v) = (2u - v, u + v), \quad 1 \leq u \leq 2,\ 0 \leq v \leq 1.

Calculate $\int_{\partial \sigma} \omega$.
7.14. Let $\omega = dx - \ln x\,dy$. Let $\sigma$ be the 2-cell parameterized by the following:

\phi(u, v) = (uv^2, u^3 v), \quad 1 \leq u \leq 2,\ 1 \leq v \leq 2.

Calculate $\int_{\partial \sigma} \omega$.
7.15. Let $\sigma$ be the 2-cell given by the following parameterization:

\phi(r, \theta) = (r\cos\theta, r\sin\theta), \quad 0 \leq r \leq 1,\ 0 \leq \theta \leq \pi.

Suppose $\omega = x^2\,dx + e^y\,dy$.

1. Calculate $\int_\sigma d\omega$ directly.

2. Let $C_1$ be the horizontal segment connecting $(-1,0)$ to $(0,0)$, and $C_2$ be the horizontal segment connecting $(0,0)$ to $(1,0)$. Calculate $\int_{C_1} \omega$ and $\int_{C_2} \omega$ directly.

3. Use your previous answers to determine the integral of $\omega$ over the top half of the unit circle (oriented counterclockwise).
7.16. Let $\omega = (x + y^3)\,dx + 3xy^2\,dy$ be a differential 1-form on $\mathbb{R}^2$. Let $Q$ be the rectangle $\{(x, y) \mid 0 \leq x \leq 3,\ 0 \leq y \leq 2\}$.

1. Compute $d\omega$.

2. Use the generalized Stokes' Theorem to compute $\int_{\partial Q} \omega$.

3. Compute $\int_{\partial Q} \omega$ directly, by integrating $\omega$ over each edge of the boundary $\partial Q$ of the rectangle, and then adding in the appropriate manner.

4. How does $\int_{-T-L} \omega$ compare to $\int_B \omega$?

5. Let $S$ be any curve in the upper half plane (i.e., the set $\{(x, y) \mid y \geq 0\}$) that goes from the point $(3,0)$ to the point $(0,0)$. What is $\int_S \omega$? Why?

6. Let $S$ be any curve that goes from the point $(3,0)$ to the point $(0,0)$. What is $\int_S \omega$? Why?
7.17. Let $\omega$ be the following 2-form on $\mathbb{R}^3$:

\omega = (x^2 + y^2)\,dy \wedge dz + (x^2 - y^2)\,dx \wedge dz.

Let $V$ be the region of $\mathbb{R}^3$ bounded by the graph of $y = \sqrt{1 - x^2}$, the planes $z = 0$ and $z = 2$, and the $xz$-plane (see Figure 7.4).
Fig. 7.4. The region $V$ of Problem 7.17.
1. Parameterize $V$ using cylindrical coordinates.

2. Determine $d\omega$.

3. Calculate $\int_V d\omega$.

4. The sides of $\partial V$ are parameterized as follows:

a. Bottom: $\varphi_B(r, \theta) = (r\cos\theta, r\sin\theta, 0)$, where $0 \leq r \leq 1$ and $0 \leq \theta \leq \pi$.

b. Top: $\varphi_T(r, \theta) = (r\cos\theta, r\sin\theta, 2)$, where $0 \leq r \leq 1$ and $0 \leq \theta \leq \pi$.

c. Flat side: $\varphi_F(x, z) = (x, 0, z)$, where $-1 \leq x \leq 1$ and $0 \leq z \leq 2$.

d. Curved side: $\varphi_C(\theta, z) = (\cos\theta, \sin\theta, z)$, where $0 \leq \theta \leq \pi$ and $0 \leq z \leq 2$.

Calculate the integral of $\omega$ over the top, bottom and flat side. (Do not calculate this integral over the curved side.)

5. If $C$ is the curved side of $\partial V$, use your answers to the previous questions to determine $\int_C \omega$.
7.18. Calculate the volume of a ball of radius one, $\{(\rho, \theta, \varphi) \mid \rho \leq 1\}$, by integrating some 2-form over the sphere of radius one, $\{(\rho, \theta, \varphi) \mid \rho = 1\}$.
7.19. Calculate

\int_C x^3\,dx + \left(\frac{1}{3} x^3 + xy^2\right) dy,

where $C$ is the circle of radius two, centered about the origin.

7.20. Suppose $\omega = x\,dx + x\,dy$ is a 1-form on $\mathbb{R}^2$. Let $C$ be the ellipse $\frac{x^2}{4} + \frac{y^2}{9} = 1$. Determine the value of $\int_C \omega$ by integrating some 2-form over the region bounded by the ellipse.
7.21. Let $\omega = -y^2\,dx + x^2\,dy$. Let $\sigma$ be the 2-cell in $\mathbb{R}^2$ parameterized by the following:

\phi(r, \theta) = (r\cosh\theta, r\sinh\theta),

where $0 \leq r \leq 1$ and $-1 \leq \theta \leq 1$. Calculate $\int_{\partial \sigma} \omega$.
7.22. Suppose $\omega$ is a 1-form on $\mathbb{R}^2$ such that $d\omega = 0$. Let $C_1$ and $C_2$ be the 1-cells given by the following parameterizations:

\begin{aligned}
& C_1: \phi(t) = (t, 0), \quad 2\pi \leq t \leq 6\pi \\
& C_2: \psi(t) = (t\cos t, t\sin t), \quad 2\pi \leq t \leq 6\pi
\end{aligned}

Show that $\int_{C_1} \omega = \int_{C_2} \omega$. (Caution: Beware of orientations!)
7.3 Vector calculus and the many faces of the generalized Stokes' Theorem
Although the language and notation may be new, you have already seen the generalized Stokes' Theorem in many guises. For example, let $f(x)$ be a 0-form on $\mathbb{R}$. Then $df = f'(x)\,dx$. Let $[a, b]$ be a 1-cell in $\mathbb{R}$. Then the generalized Stokes' Theorem tells us

\int_a^b f'(x)\,dx = \int_{[a,b]} f'(x)\,dx = \int_{\partial[a,b]} f(x) = \int_{b-a} f(x) = f(b) - f(a),

which is, of course, the "Fundamental Theorem of Calculus." If we let $R$ be some 2-chain in $\mathbb{R}^2$, then the generalized Stokes' Theorem implies

\int_{\partial R} P\,dx + Q\,dy = \int_R d(P\,dx + Q\,dy) = \int_R \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dx\,dy.
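As an illustration (not from the text), both sides of Green's theorem can be computed symbolically on the unit square, for a sample choice $P = -y$, $Q = x$ of our own.

```python
# Sketch of Green's theorem on the unit square [0,1] x [0,1], with the
# sample (hypothetical) choice P = -y, Q = x.
import sympy as sp

x, y = sp.symbols('x y')
P, Q = -y, x

# Double integral of dQ/dx - dP/dy over the square.
area_side = sp.integrate(sp.diff(Q, x) - sp.diff(P, y), (x, 0, 1), (y, 0, 1))

# Line integral of P dx + Q dy over the four edges, traversed counterclockwise.
bottom = sp.integrate(P.subs(y, 0), (x, 0, 1))   # y = 0, so dy = 0
right  = sp.integrate(Q.subs(x, 1), (y, 0, 1))   # x = 1, so dx = 0
top    = -sp.integrate(P.subs(y, 1), (x, 0, 1))  # y = 1, x runs from 1 back to 0
left   = -sp.integrate(Q.subs(x, 0), (y, 0, 1))  # x = 0, y runs from 1 back to 0
boundary_side = bottom + right + top + left

print(area_side, boundary_side)  # both equal 2
```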
This is what we call "Green's theorem" in calculus. To proceed further, we restrict ourselves to $\mathbb{R}^3$. In this dimension, there is a nice correspondence between vector fields and both 1- and 2-forms:
\begin{aligned}
\mathbf{F} = \langle F_x, F_y, F_z \rangle &\leftrightarrow \omega^1_{\mathbf{F}} = F_x\,dx + F_y\,dy + F_z\,dz \\
&\leftrightarrow \omega^2_{\mathbf{F}} = F_x\,dy \wedge dz - F_y\,dx \wedge dz + F_z\,dx \wedge dy.
\end{aligned}
On $\mathbb{R}^3$ there is also a useful correspondence between 0-forms (functions) and 3-forms:

f(x, y, z) \leftrightarrow \omega^3_f = f\,dx \wedge dy \wedge dz.
We can use these correspondences to define various operations involving functions and vector fields. For example, suppose $f: \mathbb{R}^3 \to \mathbb{R}$ is a 0-form. Then $df$ is the 1-form $\frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy + \frac{\partial f}{\partial z}dz$. The vector field associated to this 1-form is $\left\langle \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right\rangle$. In calculus we call this vector field $\operatorname{grad} f$, or $\nabla f$. In other words, $\nabla f$ is the vector field associated with the 1-form $df$. This can be summarized by the equation

df = \omega^1_{\nabla f}.
It will be useful to think of this as a diagram as well.
Example 38. Suppose $f = x^2 y^3 z$. Then $df = 2xy^3 z\,dx + 3x^2 y^2 z\,dy + x^3 y^3\,dz$. The associated vector field, $\operatorname{grad} f$, is then $\nabla f = \langle 2xy^3 z, 3x^2 y^2 z, x^3 y^3 \rangle$.
Similarly, if we start with a vector field, $\mathbf{F}$, form the associated 1-form, $\omega^1_{\mathbf{F}}$, differentiate it, and look at the corresponding vector field, then the result is called $\operatorname{curl} \mathbf{F}$, or $\nabla \times \mathbf{F}$. So, $\nabla \times \mathbf{F}$ is the vector field associated with the 2-form $d\omega^1_{\mathbf{F}}$. This can be summarized by the equation

d\omega^1_{\mathbf{F}} = \omega^2_{\nabla \times \mathbf{F}}.
This can also be illustrated by the following diagram.
Example 39. Let $\mathbf{F} = \langle xy, yz, x^2 \rangle$. The associated 1-form is then

\omega^1_{\mathbf{F}} = xy\,dx + yz\,dy + x^2\,dz.

The derivative of this 1-form is the 2-form

d\omega^1_{\mathbf{F}} = -y\,dy \wedge dz + 2x\,dx \wedge dz - x\,dx \wedge dy.

The vector field associated to this 2-form is $\operatorname{curl} \mathbf{F}$, which is

\nabla \times \mathbf{F} = \langle -y, -2x, -x \rangle.
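The curl of Example 39 can be double-checked (not in the text) from the familiar component formula $\nabla \times \mathbf{F} = \left\langle \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z},\ \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x},\ \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right\rangle$:

```python
# Sketch: the curl of F = <x*y, y*z, x**2> from Example 39, computed
# component-wise from the partial-derivative formula.
import sympy as sp

x, y, z = sp.symbols('x y z')
Fx, Fy, Fz = x*y, y*z, x**2

curl = [sp.diff(Fz, y) - sp.diff(Fy, z),
        sp.diff(Fx, z) - sp.diff(Fz, x),
        sp.diff(Fy, x) - sp.diff(Fx, y)]
print(curl)  # [-y, -2*x, -x]
```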
Lastly, we can start with a vector field, $\mathbf{F} = \langle F_x, F_y, F_z \rangle$, and then look at the 3-form $d\omega^2_{\mathbf{F}} = \left(\frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}\right) dx \wedge dy \wedge dz$ (see Problem 6.13). The function $\frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}$ is called $\operatorname{div} \mathbf{F}$, or $\nabla \cdot \mathbf{F}$. This is summarized in the following equation and diagram.
\begin{gathered}
d\omega^2_{\mathbf{F}} = \omega^3_{\nabla \cdot \mathbf{F}} \\
\mathbf{F} \xrightarrow{\text{div}} \nabla \cdot \mathbf{F} \\
\downarrow \\
\omega^2_{\mathbf{F}} \xrightarrow{d} d\omega^2_{\mathbf{F}}
\end{gathered}
Example 40. Let $\mathbf{F} = \langle xy, yz, x^2 \rangle$. The associated 2-form is then

\omega^2_{\mathbf{F}} = xy\,dy \wedge dz - yz\,dx \wedge dz + x^2\,dx \wedge dy.

The derivative is the 3-form

d\omega^2_{\mathbf{F}} = (y + z)\,dx \wedge dy \wedge dz.
So $\operatorname{div} \mathbf{F}$ is the function $\nabla \cdot \mathbf{F} = y + z$.
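Again as a check (not in the text), the divergence of Example 40 is the sum of the three diagonal partial derivatives:

```python
# Sketch: the divergence of F = <x*y, y*z, x**2> from Example 40.
import sympy as sp

x, y, z = sp.symbols('x y z')
Fx, Fy, Fz = x*y, y*z, x**2

div = sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)
print(div)  # y + z
```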
Two important vector identities follow from the fact that for a differential form, $\omega$, calculating $d(d\omega)$ always yields zero (see Problem 6.9). For the first identity, consider the following diagram.

This shows that if $f$ is a 0-form, then the vector field corresponding to $ddf$ is $\nabla \times (\nabla f)$. But $ddf = 0$, so we conclude

\nabla \times (\nabla f) = 0.
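This identity is nothing more than the equality of mixed partial derivatives, which can be seen symbolically for a generic smooth $f$ (a check of my own, not from the text):

```python
# Sketch: curl(grad f) = 0 for a generic smooth f, using equality of
# mixed partial derivatives (which sympy applies automatically).
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

grad_f = [sp.diff(f, v) for v in (x, y, z)]
curl_grad_f = [sp.diff(grad_f[2], y) - sp.diff(grad_f[1], z),
               sp.diff(grad_f[0], z) - sp.diff(grad_f[2], x),
               sp.diff(grad_f[1], x) - sp.diff(grad_f[0], y)]
print(curl_grad_f)  # [0, 0, 0]
```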
For the second identity, consider the diagram

This shows that if $dd\omega^1_{\mathbf{F}}$ is written as $g\,dx \wedge dy \wedge dz$, then the function $g$ is equal to $\nabla \cdot (\nabla \times \mathbf{F})$. But $dd\omega^1_{\mathbf{F}} = 0$, so we conclude

\nabla \cdot (\nabla \times \mathbf{F}) = 0.
In vector calculus we also learn how to integrate vector fields over parameterized curves (1-chains) and surfaces (2-chains). Suppose first that $\sigma$ is some parameterized curve. Then we can integrate the component of $\mathbf{F}$ which points in the direction of the tangent vectors to $\sigma$. This integral is usually denoted by $\int_\sigma \mathbf{F} \cdot d\mathbf{s}$, and its definition is precisely the same as the definition we learned here for $\int_\sigma \omega^1_{\mathbf{F}}$. A special case of this integral arises when $\mathbf{F} = \nabla f$, for some function $f$. In this case, $\omega^1_{\mathbf{F}}$ is just $df$, so the definition of $\int_\sigma \nabla f \cdot d\mathbf{s}$ is the same as that of $\int_\sigma df$.
7.23. Let $C$ be any curve in $\mathbb{R}^3$ from $(0,0,0)$ to $(1,1,1)$. Let $\mathbf{F}$ be the vector field $\langle yz, xz, xy \rangle$. Show that $\int_C \mathbf{F} \cdot d\mathbf{s}$ does not depend on $C$.
We also learn to integrate vector fields over parameterized surfaces. In this case, the quantity we integrate is the component of the vector field which is normal to the surface. This integral is often denoted by $\int_S \mathbf{F} \cdot d\mathbf{S}$. Its definition is precisely the same as that of $\int_S \omega^2_{\mathbf{F}}$ (see Problems 4.23 and 4.24). A special case of this is when $\mathbf{F} = \nabla \times \mathbf{G}$, for some vector field $\mathbf{G}$. Then $\omega^2_{\mathbf{F}}$ is just $d\omega^1_{\mathbf{G}}$, so we see that $\int_S (\nabla \times \mathbf{G}) \cdot d\mathbf{S}$ must be the same as $\int_S d\omega^1_{\mathbf{G}}$.
The most basic thing to integrate over a 3-dimensional region (i.e., a 3-chain), $\Omega$, in $\mathbb{R}^3$ is a function $f(x, y, z)$. In calculus we denote this integral as $\int_\Omega f\,dV$. Note that this is precisely the same as $\int_\Omega \omega^3_f$. A special case is when $f = \nabla \cdot \mathbf{F}$, for some vector field $\mathbf{F}$. In this case, $\int_\Omega \omega^3_{\nabla \cdot \mathbf{F}}$ is the same as the integral of the differential form $d\omega^2_{\mathbf{F}}$ over $\Omega$.
We summarize the equivalence between the integrals developed in vector calculus and various integrals of differential forms in Table 7.1.
Table 7.1. The equivalence between the integrals of vector calculus and differential forms.
Let us now apply the generalized Stokes' Theorem to various situations. First, we start with a parameterization, $\varphi: [a, b] \to \sigma \subset \mathbb{R}^3$, of a curve in $\mathbb{R}^3$, and a function, $f: \mathbb{R}^3 \to \mathbb{R}$. Then we have

\int_\sigma \nabla f \cdot d\mathbf{s} \equiv \int_\sigma df = \int_{\partial \sigma} f = f(\varphi(b)) - f(\varphi(a)).
This shows the independence of path of line integrals of gradient fields. We can use this to prove that a line integral of a gradient field over any simple closed curve is 0, but for us there is an easier, direct proof, which again uses the generalized Stokes' Theorem. Suppose $\sigma$ is a simple closed loop in $\mathbb{R}^3$ (i.e., $\partial \sigma = \emptyset$). Then $\sigma = \partial D$, for some 2-chain, $D$. We now have
int_(sigma)grad f*ds-=int_(sigma)df=int_(D)ddf=0.\int_{\sigma} \nabla f \cdot d \mathbf{s} \equiv \int_{\sigma} d f=\int_{D} d d f=0 .
Now, suppose we have a vector field, $\mathbf{F}$, and a parameterized surface, $S$. Yet another application of the generalized Stokes' Theorem yields
int_(del S)F*ds-=int_(del S)omega_(F)^(1)=int_(S)domega_(F)^(1)-=int_(S)(grad xxF)*dS.\int_{\partial S} \mathbf{F} \cdot d \mathbf{s} \equiv \int_{\partial S} \omega_{\mathbf{F}}^{1}=\int_{S} d \omega_{\mathbf{F}}^{1} \equiv \int_{S}(\nabla \times \mathbf{F}) \cdot d \mathbf{S} .
In vector calculus we call this equality "Stokes' theorem." In some sense, $\nabla \times \mathbf{F}$ measures the "twisting" of $\mathbf{F}$ at points of $S$. So Stokes' theorem says that the net twisting of $\mathbf{F}$ over all of $S$ is the same as the amount $\mathbf{F}$ circulates around $\partial S$.
Example 41. Suppose we are faced with a problem phrased as: "Use Stokes' theorem to calculate $\int_C \mathbf{F} \cdot d\mathbf{s}$, where $C$ is the curve of intersection of the cylinder $x^2 + y^2 = 1$ and the plane $z = x + 1$, and $\mathbf{F}$ is the vector field $\langle -x^2 y, xy^2, z^3 \rangle$."
We will solve this problem by translating to the language of differential forms and using the generalized Stokes' Theorem instead. To begin, note that $\int_C \mathbf{F} \cdot d\mathbf{s} = \int_C \omega^1_{\mathbf{F}}$, and

\omega^1_{\mathbf{F}} = -x^2 y\,dx + xy^2\,dy + z^3\,dz.
Now, to use the generalized Stokes' Theorem we will need to calculate

d\omega^1_{\mathbf{F}} = (x^2 + y^2)\,dx \wedge dy.
Let $D$ denote the subset of the plane $z = x + 1$ bounded by $C$. Then $\partial D = C$. Hence, by the generalized Stokes' Theorem we have
int_(C)omega_(F)^(1)=int_(D)domega_(F)^(1)=int_(D)(x^(2)+y^(2))dx^^dy\int_{C} \omega_{\mathbf{F}}^{1}=\int_{D} d \omega_{\mathbf{F}}^{1}=\int_{D}\left(x^{2}+y^{2}\right) d x \wedge d y
The region $D$ is parameterized by $\Psi(r, \theta) = (r\cos\theta, r\sin\theta, r\cos\theta + 1)$, where $0 \leq r \leq 1$ and $0 \leq \theta \leq 2\pi$. Using this, one can (and should!) show that $\int_D (x^2 + y^2)\,dx \wedge dy = \frac{\pi}{2}$.
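For the skeptical reader, the final integral can be reproduced symbolically (a check of my own): under $\Psi$, the form $dx \wedge dy$ pulls back to $\det(J)\,dr \wedge d\theta$, where $J$ is the Jacobian of $(x, y)$ with respect to $(r, \theta)$, and $x^2 + y^2$ becomes $r^2$.

```python
# Sketch: the final integral of Example 41, computed by pulling
# (x**2 + y**2) dx ^ dy back under Psi(r, theta).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = r * sp.cos(th)
y = r * sp.sin(th)

# dx ^ dy pulls back to det(Jacobian of (x, y) wrt (r, theta)) dr ^ d(theta).
jac = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
                 [sp.diff(y, r), sp.diff(y, th)]]).det()

integrand = sp.simplify((x**2 + y**2) * jac)     # simplifies to r**3
integral = sp.integrate(integrand, (r, 0, 1), (th, 0, 2 * sp.pi))
print(integral)  # pi/2
```

The $z$-component of $\Psi$ never enters, since the integrand only involves $dx \wedge dy$.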
7.24. Let $C$ be the square with sides $(x, \pm 1, 1)$, where $-1 \leq x \leq 1$, and $(\pm 1, y, 1)$, where $-1 \leq y \leq 1$, with the indicated orientation (see Figure 7.5). Let $\mathbf{F}$ be the vector field $\langle xy, x^2, y^2 z \rangle$. Compute $\int_C \mathbf{F} \cdot d\mathbf{s}$.
Suppose now that $\Omega$ is some volume in $\mathbb{R}^3$. Then we have

\int_{\partial \Omega} \mathbf{F} \cdot d\mathbf{S} \equiv \int_{\partial \Omega} \omega^2_{\mathbf{F}} = \int_\Omega d\omega^2_{\mathbf{F}} \equiv \int_\Omega (\nabla \cdot \mathbf{F})\,dV.
Fig. 7.5.
This last equality is called "Gauss' Divergence Theorem." The quantity $\nabla \cdot \mathbf{F}$ is a measure of how much $\mathbf{F}$ "spreads out" at a point. So Gauss' theorem says that the total spreading out of $\mathbf{F}$ inside $\Omega$ is the same as the net amount of $\mathbf{F}$ "escaping" through $\partial \Omega$.
7.25. Let $\Omega$ be the cube $\{(x, y, z) \mid 0 \leq x, y, z \leq 1\}$. Let $\mathbf{F}$ be the vector field $\langle xy^2, y^3, x^2 y^2 \rangle$. Compute $\int_{\partial \Omega} \mathbf{F} \cdot d\mathbf{S}$.
8
Applications
8.1 Maxwell's equations
As a brief application, we show how the language of differential forms can greatly simplify the classical vector equations of Maxwell. Much of this material is taken from [MTW73], where the interested student can find many more applications of differential forms to physics.
Maxwell's equations describe the relationship between electric and magnetic fields. Classically, both electricity and magnetism are described as 3-dimensional vector fields which vary with time:

\mathbf{E} = \langle E_x, E_y, E_z \rangle, \quad \mathbf{B} = \langle B_x, B_y, B_z \rangle,

where $E_x$, $E_y$, $E_z$, $B_x$, $B_y$, and $B_z$ are all functions of $x$, $y$, $z$ and $t$. Maxwell's equations are then:

\begin{aligned}
\nabla \cdot \mathbf{B} &= 0, & \nabla \times \mathbf{E} + \frac{\partial \mathbf{B}}{\partial t} &= 0, \\
\nabla \cdot \mathbf{E} &= 4\pi\rho, & \nabla \times \mathbf{B} - \frac{\partial \mathbf{E}}{\partial t} &= 4\pi \mathbf{J}.
\end{aligned}
The quantity $\rho$ is called the charge density and the vector $\mathbf{J} = \langle J_x, J_y, J_z \rangle$ is called the current density.
We can make all of this look much simpler by making the following definitions. First, we define a 2-form called the Faraday, which simultaneously describes both the electric and magnetic fields:

\begin{aligned}
\mathbf{F} = & \ E_x\,dx \wedge dt + E_y\,dy \wedge dt + E_z\,dz \wedge dt \\
& + B_x\,dy \wedge dz + B_y\,dz \wedge dx + B_z\,dx \wedge dy.
\end{aligned}
Next we define the "dual" 2-form, called the Maxwell:

\begin{aligned}
{}^*\mathbf{F} = & \ E_x\,dy \wedge dz + E_y\,dz \wedge dx + E_z\,dx \wedge dy \\
& + B_x\,dt \wedge dx + B_y\,dt \wedge dy + B_z\,dt \wedge dz.
\end{aligned}
We also define the 4-current, $\mathbf{J}$, and its "dual," ${}^*\mathbf{J}$:

\begin{aligned}
\mathbf{J} = & \ \langle \rho, J_x, J_y, J_z \rangle \\
{}^*\mathbf{J} = & \ \rho\,dx \wedge dy \wedge dz \\
& - J_x\,dt \wedge dy \wedge dz \\
& - J_y\,dt \wedge dz \wedge dx \\
& - J_z\,dt \wedge dx \wedge dy.
\end{aligned}
8.1. Show that the equation $d\mathbf{F} = 0$ implies the first two of Maxwell's equations.

8.2. Show that the equation $d\,{}^*\mathbf{F} = 4\pi\,{}^*\mathbf{J}$ implies the second two of Maxwell's equations.
The differential form version of Maxwell's equations has a huge advantage over the vector formulation: it is coordinate free! A 2-form such as $\mathbf{F}$ is an operator that "eats" pairs of vectors and "spits out" numbers. The way it acts is completely geometric; that is, it can be defined without any reference to the coordinate system $(t, x, y, z)$. This is especially poignant when one realizes that Maxwell's equations are laws of nature that should not depend on a man-made construction such as coordinates.
8.2 Foliations and contact structures
Everyone has seen tree rings and layers in sedimentary rock. These are examples of foliations. Intuitively, a foliation is when some region of space has been "filled up" with lower-dimensional surfaces. A full treatment of foliations is a topic for a much larger textbook than this one. Here we will only be discussing foliations of $\mathbb{R}^3$.
Let $U$ be an open subset of $\mathbb{R}^3$. We say $U$ has been foliated if there is a family $\varphi^t: R_t \to U$ of parameterizations (where for each $t$ the domain $R_t \subset \mathbb{R}^2$) such that every point of $U$ is in the image of exactly one such parameterization. In other words, the images of the parameterizations $\varphi^t$ are surfaces that fill up $U$, and no two overlap.
Suppose $p$ is a point of $U$ and $U$ has been foliated as above. Then there is a unique value of $t$ such that $p$ is a point in $\varphi^t(R_t)$. The partial derivatives $\frac{\partial \varphi^t}{\partial x}(p)$ and $\frac{\partial \varphi^t}{\partial y}(p)$ are then two vectors that span a plane in $T_p\mathbb{R}^3$. Let's call this plane $\Pi_p$. In other words, if $U$ is foliated, then at every point $p$ of $U$ we get a plane $\Pi_p$ in $T_p\mathbb{R}^3$.
The family $\{\Pi_p\}$ is an example of a plane field. In general, a plane field is just a choice of a plane in each tangent space which varies smoothly from point to point in $\mathbb{R}^3$. We say a plane field is integrable if it consists of the tangent planes to a foliation.
This should remind you a little of first-term calculus. If $f: \mathbb{R}^1 \to \mathbb{R}^1$ is a differentiable function, then at every point $p$ on its graph we get a line in $T_p\mathbb{R}^2$ (see Figure 4.2). If we only know the lines and want the original function, then we integrate.
There is a theorem that says that every line field on $\mathbb{R}^2$ is integrable. The question we would like to answer in this section is whether or not this is true of plane fields on $\mathbb{R}^3$. The first step is to figure out how to specify a plane field in some reasonably nice way. This is where differential forms come in. Suppose $\{\Pi_p\}$ is a plane field. At each point $p$, we can define a line in $T_p\mathbb{R}^3$ (i.e., a line field) by looking at the set of all vectors that are perpendicular to $\Pi_p$. We can then define a 1-form $\omega$ by projecting vectors onto these lines. So, if $V_p$ is a vector in $\Pi_p$, then $\omega(V_p) = 0$. Another way to say this is that the plane $\Pi_p$ is the set of all vectors which yield zero when plugged into $\omega$. In shorthand, we write this set as $\operatorname{Ker} \omega$ ("Ker" comes from the word "kernel," a term from linear algebra). So all we are saying is that $\omega$ is a 1-form such that $\Pi_p = \operatorname{Ker} \omega$. This is very convenient. To specify a plane field, all we have to do now is write down a 1-form!
Example 42. Suppose $\omega = dx$. Then, at each point $p$ of $\mathbb{R}^3$, the vectors of $T_p\mathbb{R}^3$ that yield zero when plugged into $\omega$ are all those in the $dy\,dz$-plane. Hence, $\operatorname{Ker} \omega$ is the plane field consisting of all of the $dy\,dz$-planes (one for every point of $\mathbb{R}^3$). It is obvious that this plane field is integrable; at each point $p$ we just have the tangent plane to the plane parallel to the $yz$-plane through $p$.
In the above example, note that any 1-form that looks like $f(x, y, z)\,dx$ defines the same plane field, as long as $f$ is non-zero everywhere. So, knowing something about a plane field (like the assumption that it is integrable) seems like it might not say much about the 1-form $\omega$, since so many different 1-forms give the same plane field. Let's investigate this further.
First, let's see if there is anything special about the derivative of a 1-form that looks like $\omega = f(x, y, z)\,dx$. This is easy: $d\omega = \frac{\partial f}{\partial y}dy \wedge dx + \frac{\partial f}{\partial z}dz \wedge dx$. This is nothing special so far. What about combining this with $\omega$? Let's compute:

\omega \wedge d\omega = f(x, y, z)\,dx \wedge \left(\frac{\partial f}{\partial y}dy \wedge dx + \frac{\partial f}{\partial z}dz \wedge dx\right) = 0.
Now that is special! In fact, recall our earlier emphasis on the fact that forms are coordinate free. In other words, any computation one can perform with forms will give the same answer regardless of what coordinates are chosen. The wonderful thing about foliations is that near every point you can always choose coordinates so that your foliation looks like planes parallel to the yzy z-plane. In other words, the above computation is not as special as you might think:
Theorem 2. If Ker omega\omega is an integrable plane field, then omega^^a omega=0\omega \wedge a \omega=0 at every point of R^(3)\mathrm{R}^{3}.
It should be noted that we have only chosen to work in $\mathbb{R}^3$ for ease of visualization. There are higher-dimensional definitions of foliations and plane fields. In general, if the kernel of a 1-form $\omega$ defines an integrable plane field, then $\omega \wedge d\omega = 0$.
Our search for a plane field that is not integrable (i.e., not the tangent planes to a foliation) has now been reduced to the search for a 1-form $\omega$ for which $\omega \wedge d\omega \neq 0$ somewhere. There are many such forms. An easy one is $x\,dy + dz$. We compute:
$$(x\,dy + dz) \wedge d(x\,dy + dz) = (x\,dy + dz) \wedge (dx \wedge dy) = dz \wedge dx \wedge dy.$$
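Both wedge computations above can be double-checked mechanically. Here is a small sympy sketch (the helper `w_wedge_dw` is ours, not the text's): for $\omega = P\,dx + Q\,dy + R\,dz$, the coefficient of $dx \wedge dy \wedge dz$ in $\omega \wedge d\omega$ works out to $P(R_y - Q_z) - Q(R_x - P_z) + R(Q_x - P_y)$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

def w_wedge_dw(P, Q, R):
    """Coefficient of dx^dy^dz in omega ^ d(omega), for omega = P dx + Q dy + R dz."""
    return sp.simplify(
        P * (sp.diff(R, y) - sp.diff(Q, z))
        - Q * (sp.diff(R, x) - sp.diff(P, z))
        + R * (sp.diff(Q, x) - sp.diff(P, y)))

integrable = w_wedge_dw(f, 0, 0)   # omega = f(x,y,z) dx
contact = w_wedge_dw(0, x, 1)      # omega = x dy + dz

print(integrable)  # 0
print(contact)     # 1
```

The first call confirms the conclusion of Theorem 2 for $f(x,y,z)\,dx$; the second confirms that $\omega \wedge d\omega$ for $x\,dy + dz$ is the (nowhere-zero) form $dx \wedge dy \wedge dz$.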
Our answer is quite special. All we needed was a 1-form such that
$$\omega \wedge d\omega \neq 0$$
somewhere. What we found was a 1-form for which $\omega \wedge d\omega \neq 0$ everywhere. This means that there is not a single point of $\mathbb{R}^3$ which has a neighborhood in which the planes given by $\operatorname{Ker}(x\,dy + dz)$ are tangent to a foliation. Such a plane field is called a contact structure.
At this point you are probably wondering, "What could $\operatorname{Ker}(x\,dy + dz)$ possibly look like?!" It is not so easy to visualize this, but we have tried to give you some indication in Figure 8.1. A good exercise is to stare at this picture long enough to convince yourself that the planes pictured cannot be the tangent planes to a foliation.
We have just seen how we can use differential forms to tell if a plane field is integrable. But one may still wonder if there is more we can say about a 1-form, assuming its kernel is integrable. Let's go back to the expression $\omega \wedge d\omega$. Recall that $\omega$ is a 1-form, which makes $d\omega$ a 2-form, and hence $\omega \wedge d\omega$ a 3-form.
A 3-form on $T_p\mathbb{R}^3$ measures the volume of the parallelepiped spanned by three vectors, multiplied by a constant. For example, if $\Psi = \alpha \wedge \beta \wedge \gamma$ is a 3-form, then the constant it scales volume by is given by the volume of the parallelepiped spanned by the vectors $\langle\alpha\rangle$, $\langle\beta\rangle$ and $\langle\gamma\rangle$ (where "$\langle\alpha\rangle$" refers to the vector dual to the 1-form $\alpha$ introduced in Section 4.3). If it turns out that $\Psi$ is the zero 3-form, then the vector $\langle\alpha\rangle$ must be in the plane spanned by the vectors $\langle\beta\rangle$ and $\langle\gamma\rangle$.
Fig. 8.1. The plane field $\operatorname{Ker}(x\,dy + dz)$.
On $\mathbb{R}^3$ the results of Section 4.3 tell us that a 2-form such as $d\omega$ can always be written as $\alpha \wedge \beta$, for some 1-forms $\alpha$ and $\beta$. If $\omega$ is a 1-form with integrable kernel, then we have already seen that $\omega \wedge d\omega = \omega \wedge \alpha \wedge \beta = 0$. But this tells us that $\langle\omega\rangle$ must be in the plane spanned by the vectors $\langle\alpha\rangle$ and $\langle\beta\rangle$. Now we can invoke Lemma 1 of Chapter 4, which says that we can rewrite $d\omega$ as $\omega \wedge \nu$, for some 1-form $\nu$. (See also Problem 4.27.)
If we start with a foliation and choose a 1-form $\omega$ whose kernel consists of planes tangent to the foliation, then the 1-form $\nu$ that we have just found is in no way canonical. We made a lot of choices to get to $\nu$, and different choices will end up with different 1-forms. But here is the amazing fact: the integral of the 3-form $\nu \wedge d\nu$ does not depend on any of our choices! It is completely determined by the original foliation. Whenever a mathematician runs into a situation like this, they usually throw up their hands and say, "Eureka! I've discovered an invariant." The quantity $\int \nu \wedge d\nu$ is referred to as the Godbillon-Vey invariant of the foliation. It is a topic of current research to identify exactly what information this number tells us about the foliation.
Two special cases are worth noting. First, it may turn out that $\nu \wedge d\nu = 0$ everywhere. This tells us that the plane field given by $\operatorname{Ker}\nu$ is integrable, so we get another foliation. The other interesting case is when $\nu \wedge d\nu$ is nowhere zero. Then we get a contact structure.
8.3 How not to visualize a differential 1-form
There are several contemporary physics texts that attempt to give a visual interpretation of differential forms that seems quite different from the one presented here. As this alternate interpretation is much simpler than anything described in these notes, one may wonder why we have not taken this approach.
Let's look again at the 1-form $dx$ on $\mathbb{R}^3$. Given a vector $V_p$ at a point $p$, the value of $dx(V_p)$ is just the projection of $V_p$ onto the $dx$-axis in $T_p\mathbb{R}^3$. Now, let $C$ be some parameterized curve in $\mathbb{R}^3$ for which the $x$-coordinate is always increasing. Then $\int_C dx$ is just the length of the projection of $C$ onto the $x$-axis. To the nearest integer, this is just the number of planes that $C$ punctures of the form $x = n$, where $n$ is an integer. So one way to visualize the form $dx$ is to picture these planes.
Fig. 8.2. "Surfaces" of $\omega$?
This view is very appealing. After all, every 1-form $\omega$, at every point $p$, projects vectors onto some line $l_p$. So can we integrate $\omega$ along a curve $C$ (at least to the nearest integer) by counting the number of surfaces punctured by $C$ whose tangent planes are perpendicular to the lines $l_p$ (see Figure 8.2)? If you have read the previous section, you might guess that the answer is a categorical NO!
Recall that the planes perpendicular to the lines $l_p$ are precisely $\operatorname{Ker}\omega$. To say that there are surfaces whose tangent planes are perpendicular to the lines $l_p$ is the same thing as saying that $\operatorname{Ker}\omega$ is an integrable plane field. But we have seen in the previous section that there are 1-forms as simple as $x\,dy + dz$ whose kernels are nowhere integrable.
Fig. 8.3. The Reeb foliation of the solid torus.
Can we at least use this interpretation for a 1-form whose kernel is integrable? Unfortunately, the answer is still no. Let $\omega$ be the 1-form on the solid torus whose kernel consists of the planes tangent to the foliation pictured in Figure 8.3. (This is called the Reeb foliation of the solid torus.) The surfaces of this foliation spiral continually outward. So if we try to pick some number of "sample" surfaces, then they will "bunch up" near the boundary torus. This seems to indicate that if we want to integrate $\omega$ over any path that cuts through the solid torus, then we should get an infinite answer, since such a path would intersect our "sample" surfaces an infinite number of times. However, we can certainly find a 1-form $\omega$ for which this is not the case.
We do not want to end this section on such a down note. Although it is not valid in general to visualize a 1-form as a sample collection of surfaces from a foliation, we can visualize it as a plane field. For example, Figure 8.1 is a pretty good depiction of the 1-form $x\,dy + dz$. In this picture there are a few evenly spaced elements of its kernel, but this is enough. To get a rough idea of the value of $\int_C x\,dy + dz$ we can just count the number of (transverse) intersections of the planes pictured with $C$. So, for example, if $C$ is a curve whose tangents are always contained in one of these planes (a so-called Legendrian curve), then $\int_C x\,dy + dz$ will be zero. Inspection of the picture reveals that examples of such curves are the lines parallel to the $x$-axis.
8.3. Show that if $C$ is a line parallel to the $x$-axis, then $\int_C x\,dy + dz = 0$.
9
Manifolds
9.1 Pull-backs
Before moving on to defining forms in more general contexts, we need to introduce one more concept. Let's re-examine Equation 5.3:
The form in the integrand on the right was defined so as to integrate to give the same answer as the form on the left. This is what we would like to generalize. Suppose $\varphi: \mathbb{R}^n \rightarrow \mathbb{R}^m$ is a parameterization, and $\omega$ is a $k$-form on $\mathbb{R}^m$. We define the pull-back of $\omega$ under $\varphi$ to be the form on $\mathbb{R}^n$ which gives the same integral over any $k$-cell, $\sigma$, as $\omega$ does when integrated over $\varphi(\sigma)$. Following convention, we denote the pull-back of $\omega$ under $\varphi$ as "$\varphi^*\omega$."
So how do we decide how $\varphi^*\omega$ acts on a $k$-tuple of vectors in $T_p\mathbb{R}^n$? The trick is to use $\varphi$ to translate the vectors to a $k$-tuple in $T_{\varphi(p)}\mathbb{R}^m$, and then plug them into $\omega$. The matrix $D\varphi$, whose columns are the partial derivatives of $\varphi$, is an $m \times n$ matrix. This matrix acts on vectors in $T_p\mathbb{R}^n$, and returns vectors in $T_{\varphi(p)}\mathbb{R}^m$. So, we define (see Figure 9.1):
$$\varphi^*\omega(V_p^1, \ldots, V_p^k) = \omega(D\varphi(V_p^1), \ldots, D\varphi(V_p^k)).$$
Example 43. Suppose $\omega = y\,dx + z\,dy + x\,dz$ is a 1-form on $\mathbb{R}^3$, and $\varphi(a,b) = (a+b, a-b, ab)$ is a map from $\mathbb{R}^2$ to $\mathbb{R}^3$. Then $\varphi^*\omega$ will be a 1-form on $\mathbb{R}^2$. To determine which one, we can examine how it acts on the vectors $\langle 1,0\rangle_{(a,b)}$ and $\langle 0,1\rangle_{(a,b)}$.
$$\varphi^*\omega = (a - b + 2ab + b^2)\,da + (a - b + a^2)\,db.$$
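If sympy is available, the pull-back in Example 43 can be reproduced by substituting the parameterization into $\omega$ and collecting the $da$ and $db$ coefficients (a sketch of ours, not part of the text):

```python
import sympy as sp

a, b = sp.symbols('a b')
X, Y, Z = a + b, a - b, a*b        # phi(a, b) = (a + b, a - b, ab)
P, Q, R = Y, Z, X                  # omega = y dx + z dy + x dz, with x, y, z substituted

# dx pulls back to X_a da + X_b db, and similarly for dy and dz
da_coeff = sp.expand(P*sp.diff(X, a) + Q*sp.diff(Y, a) + R*sp.diff(Z, a))
db_coeff = sp.expand(P*sp.diff(X, b) + Q*sp.diff(Y, b) + R*sp.diff(Z, b))

print(da_coeff)  # equals a - b + 2*a*b + b**2
print(db_coeff)  # equals a - b + a**2
```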
9.1. If $\omega = x^2\,dy \wedge dz + y^2\,dz \wedge dw$ is a 2-form on $\mathbb{R}^4$, and $\varphi(a,b,c) = (a, b, c, abc)$, then what is $\varphi^*\omega$?
9.2. If $\omega$ is an $n$-form on $\mathbb{R}^m$ and $\varphi: \mathbb{R}^n \rightarrow \mathbb{R}^m$, then
9.3. If $\sigma$ is a $k$-cell in $\mathbb{R}^n$, $\varphi: \mathbb{R}^n \rightarrow \mathbb{R}^m$, and $\omega$ is a $k$-form on $\mathbb{R}^m$, then
9.4. If $\varphi: \mathbb{R}^n \rightarrow \mathbb{R}^m$ and $\omega$ is a $k$-form on $\mathbb{R}^m$, then $d(\varphi^*\omega) = \varphi^*(d\omega)$.
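As a sanity check of Problem 9.4 (a sketch under our own coefficient conventions, not the text's proof), one can verify $d(\varphi^*\omega) = \varphi^*(d\omega)$ on the data of Example 43 by comparing the single $da \wedge db$ coefficients of both sides:

```python
import sympy as sp

a, b = sp.symbols('a b')
X, Y, Z = a + b, a - b, a*b        # phi(a, b) = (a + b, a - b, ab)

# phi* omega = A da + B db, for omega = y dx + z dy + x dz
A = Y*sp.diff(X, a) + Z*sp.diff(Y, a) + X*sp.diff(Z, a)
B = Y*sp.diff(X, b) + Z*sp.diff(Y, b) + X*sp.diff(Z, b)
d_of_pullback = sp.expand(sp.diff(B, a) - sp.diff(A, b))   # da^db coefficient of d(phi* omega)

# d omega = (Q_x - P_y) dx^dy + (R_x - P_z) dx^dz + (R_y - Q_z) dy^dz,
# which for P, Q, R = y, z, x gives constants -1, 1, -1
c_xy, c_xz, c_yz = sp.Integer(-1), sp.Integer(1), sp.Integer(-1)

def jac(U, V):
    """da^db coefficient of the pull-back of dU^dV."""
    return sp.diff(U, a)*sp.diff(V, b) - sp.diff(U, b)*sp.diff(V, a)

pullback_of_d = sp.expand(c_xy*jac(X, Y) + c_xz*jac(X, Z) + c_yz*jac(Y, Z))

# both sides come out to the same coefficient, 2 - 2*b
print(d_of_pullback)
print(pullback_of_d)
```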
These exercises prepare us for the proof of the generalized Stokes' Theorem (recall that in Chapter 7 we only proved this theorem when integrating over cubes and their boundaries). Suppose $\sigma$ is an $n$-cell in $\mathbb{R}^m$, $\varphi: I^n \subset \mathbb{R}^n \rightarrow \mathbb{R}^m$ is a parameterization of $\sigma$, and $\omega$ is an $(n-1)$-form on $\mathbb{R}^m$. Then we can combine Problems 9.3, 9.4, and 7.7 to give us
$$\int_{\partial\sigma}\omega = \int_{\varphi(\partial I^n)}\omega = \int_{\partial I^n}\varphi^*\omega = \int_{I^n} d(\varphi^*\omega) = \int_{I^n}\varphi^*(d\omega) = \int_{\varphi(I^n)} d\omega = \int_{\sigma} d\omega.$$
9.2 Forms on subsets of $\mathbb{R}^n$
The goal of this chapter is to slowly work up to defining forms in a more general setting than just on $\mathbb{R}^n$. One reason for this is that the generalized Stokes' Theorem actually tells us that forms on $\mathbb{R}^n$ are not very interesting. For example, let's examine how a 1-form $\omega$ on $\mathbb{R}^2$, for which $d\omega = 0$ (i.e., $\omega$ is closed), integrates over any 1-chain, $C$, such that $\partial C = \emptyset$ (i.e., $C$ is closed). It is a basic result of topology that any such 1-chain bounds a 2-chain, $D$. Hence, $\int_C \omega = \int_D d\omega = 0$!
Fortunately, there is no reason to restrict ourselves to differential forms which are defined on all of $\mathbb{R}^n$. Instead, we can simply consider forms which are defined on subsets, $U$, of $\mathbb{R}^n$. For technical reasons, we will always assume such subsets are open; that is, for each $p \in U$, there is an $\epsilon > 0$ such that $N_\epsilon(p) \subset U$.
In this case, $T_pU = T_p\mathbb{R}^n$. Since a differential $n$-form is nothing more than a choice of an $n$-form on $T_p\mathbb{R}^n$, for each $p$ (with some condition about differentiability), it makes sense to talk about a differential form on $U$.
Example 44. The 1-form
$$\omega_0 = -\frac{y}{x^2+y^2}\,dx + \frac{x}{x^2+y^2}\,dy$$
is a differential 1-form on $\mathbb{R}^2 - (0,0)$.
9.5. Show that $d\omega_0 = 0$.
9.6. Let $C$ be the unit circle, oriented counter-clockwise. Show that $\int_C \omega_0 = 2\pi$. Hint: Let $\omega' = -y\,dx + x\,dy$. Note that on $C$, $\omega_0 = \omega'$.
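Both of these exercises can be checked with sympy. The sketch below (ours, and it does give the answers away, so try the exercises first) verifies that $d\omega_0 = 0$ and that integrating $\omega_0$ around the unit circle gives $2\pi$:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
P = -y/(x**2 + y**2)       # dx coefficient of omega_0
Q = x/(x**2 + y**2)        # dy coefficient of omega_0

# 9.5: d(omega_0) = (Q_x - P_y) dx^dy
d_omega0 = sp.simplify(sp.diff(Q, x) - sp.diff(P, y))

# 9.6: pull omega_0 back along C(t) = (cos t, sin t) and integrate over [0, 2*pi]
on_C = {x: sp.cos(t), y: sp.sin(t)}
integrand = sp.simplify(P.subs(on_C)*sp.diff(sp.cos(t), t)
                        + Q.subs(on_C)*sp.diff(sp.sin(t), t))
total = sp.integrate(integrand, (t, 0, 2*sp.pi))

print(d_omega0)  # 0
print(total)     # 2*pi
```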
If $C$ is any closed 1-chain in $\mathbb{R}^2 - (0,0)$, then the quantity $\frac{1}{2\pi}\int_C \omega_0$ is called the winding number of $C$, since it computes the number of times $C$ winds around the origin.
9.7. Let $x^+$ denote the positive $x$-axis in $\mathbb{R}^2 - (0,0)$, and let $C$ be any closed 1-chain. Suppose $V_p$ is a basis vector of $T_pC$ which agrees with the orientation of $C$ at $p$. A positive (respectively, negative) intersection of $C$ with $x^+$ is one where $V_p$ has a component which points "up" (respectively, "down"). Assume all intersections of $C$ with $x^+$ are either positive or negative. Let $P$ denote the number of positive ones and $N$ the number of negative ones. Show that $\frac{1}{2\pi}\int_C \omega_0 = P - N$.
Hint: Use the generalized Stokes' Theorem.
9.3 Forms on parameterized subsets
Recall that at each point, a differential form is simply an alternating, multi-linear map on a tangent plane. So all we need to define a differential form on a more general space is a well-defined tangent space at each point. One case in which this happens is when we have a parameterized subset of $\mathbb{R}^m$. Let $\varphi: U \subset \mathbb{R}^n \rightarrow M \subset \mathbb{R}^m$ be a (one-to-one) parameterization of $M$. Then recall that $T_pM$ is defined to be the span of the partial derivatives of $\varphi$ at $\varphi^{-1}(p)$, and is an $n$-dimensional Euclidean space, regardless of the point, $p$. Hence, we say the dimension of $M$ is $n$.
A differential $k$-form on $M$ is simply an alternating, multilinear, real-valued function on $T_pM$ for each $p \in M$, which varies differentiably with $p$. In other words, a differential $k$-form on $M$ is a whole family of $k$-forms, each one acting on $T_pM$, for different points, $p$. It is not so easy to say precisely what we mean when we say the form varies in a differentiable way with $p$. Fortunately, we have already introduced the tools necessary to do this. Let's say that $\omega$ is a family of $k$-forms, defined on $T_pM$ for each $p \in M$. Then $\varphi^*\omega$ is a family of $k$-forms, defined on $T_{\varphi^{-1}(p)}\mathbb{R}^n$, for each $p \in M$. We say that $\omega$ is a differentiable $k$-form on $M$ if $\varphi^*\omega$ is a differentiable family on $U$.
This definition illustrates an important technique which is often used when dealing with differential forms on manifolds. Rather than working on $M$ directly, we use the map $\varphi^*$ to translate problems about forms on $M$ into problems about forms on $U$. These are nice because we already know how to work with forms which are defined on open subsets of $\mathbb{R}^n$. We will have much more to say about this later.
Example 45. The infinitely long cylinder, $L$, of radius one, centered along the $z$-axis, is given by the parameterization $\varphi(a,b) = \left(\frac{a}{\sqrt{a^2+b^2}}, \frac{b}{\sqrt{a^2+b^2}}, \ln\sqrt{a^2+b^2}\right)$, whose domain is $U = \mathbb{R}^2 - (0,0)$. We can use $\varphi^*$ to solve any problem about forms on $L$ by translating it back to a problem about forms on $U$.
9.8. Consider the 1-form $\tau' = -y\,dx + x\,dy$ on $\mathbb{R}^3$. In particular, this form acts on vectors in $T_pL$, where $L$ is the cylinder of the previous example, and $p$ is any point in $L$. Let $\tau$ be the restriction of $\tau'$ to vectors in $T_pL$. So, $\tau$ is a 1-form on $L$. Compute $\varphi^*\tau$. What does this tell you that $\tau$ measures?
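The pull-back asked for in 9.8 can be checked mechanically. The sketch below (ours; it spoils the answer, so attempt the problem first) substitutes the $x$- and $y$-coordinates of the parameterization of Example 45 into $-y\,dx + x\,dy$:

```python
import sympy as sp

a, b = sp.symbols('a b', positive=True)
r = sp.sqrt(a**2 + b**2)
X, Y = a/r, b/r                    # the x and y coordinates of phi(a, b)

# -y dx + x dy, pulled back to the (a, b)-plane
da_coeff = sp.simplify(-Y*sp.diff(X, a) + X*sp.diff(Y, a))
db_coeff = sp.simplify(-Y*sp.diff(X, b) + X*sp.diff(Y, b))

print(da_coeff)  # -b/(a**2 + b**2)
print(db_coeff)  # a/(a**2 + b**2)
```

The result is exactly the winding form $\omega_0$ of Example 44, in the coordinates $(a,b)$, which is a strong hint about what the form measures.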
If $\omega$ is a $k$-form on $M$, then what do we mean by $d\omega$? Whatever the definition, we clearly want $d(\varphi^*\omega) = \varphi^*(d\omega)$. So why do we not use this to define $d\omega$? After all, we know what $d(\varphi^*\omega)$ is, since $\varphi^*\omega$ is a form on $\mathbb{R}^n$. Recall that $D\varphi_p$ is a map from $T_p\mathbb{R}^n$ to $T_{\varphi(p)}\mathbb{R}^m$. However, if we restrict the range to $TM$, then $D\varphi_p$ is one-to-one, so it makes sense to refer to $D\varphi_p^{-1}$. We now define
$$d\omega(V_p^1, \ldots, V_p^{k+1}) = d(\varphi^*\omega)\left(D\varphi_p^{-1}(V_p^1), \ldots, D\varphi_p^{-1}(V_p^{k+1})\right).$$
9.9. If $\tau'$ and $\tau$ are the 1-forms on $\mathbb{R}^3$ and $L$, respectively, defined in the previous section, compute $d\tau'$ and $d\tau$.
9.4 Forms on quotients of $\mathbb{R}^n$ (optional)
This section requires some knowledge of topology and algebra. It is not essential for the flow of the text.
While we are on the subject of differential forms on subsets of $\mathbb{R}^n$, there is a very common construction of a topological space for which it is very easy to define what we mean by a differential form. Let's look again at the cylinder, $L$, of the previous section. One way to construct $L$ is to start with the plane, $\mathbb{R}^2$, and "roll it up." More technically, we can consider the map $\mu(\theta, z) = (\cos\theta, \sin\theta, z)$. In general, this is a many-to-one map, so it is not a parameterization in the strict sense. To remedy this, one might try to restrict the domain of $\mu$ to $\{(\theta, z) \in \mathbb{R}^2 \mid 0 \leq \theta < 2\pi\}$; however, this set is not open.
Note that for each point $(\theta, z) \in \mathbb{R}^2$, $D\mu$ is a one-to-one map from $T_{(\theta,z)}\mathbb{R}^2$ to $T_{\mu(\theta,z)}L$. This is all we need in order for $\mu^*\tau$ to make sense, where $\tau$ is the form on $L$ defined in the previous section.
9.10. Show that $\mu^*\tau = d\theta$.
In this case, we say that $\mu$ is a covering map, $\mathbb{R}^2$ is a cover of $L$, and $d\theta$ is the lift of $\tau$ to $\mathbb{R}^2$.
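The lift can be computed directly. Here is a sympy sketch (ours) that pulls $-y\,dx + x\,dy$ back under $\mu(\theta, z) = (\cos\theta, \sin\theta, z)$ and confirms that the result is $d\theta$:

```python
import sympy as sp

theta, z = sp.symbols('theta z')
X, Y = sp.cos(theta), sp.sin(theta)    # x and y coordinates of mu(theta, z)

# collect the dtheta and dz coefficients of the pull-back of -y dx + x dy
dtheta_coeff = sp.simplify(-Y*sp.diff(X, theta) + X*sp.diff(Y, theta))
dz_coeff = sp.simplify(-Y*sp.diff(X, z) + X*sp.diff(Y, z))

print(dtheta_coeff, dz_coeff)  # 1 0, i.e. the lift is d theta
```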
9.11. Suppose $\omega_0$ is the 1-form on $\mathbb{R}^2 - (0,0)$ which we used to define the winding number. Let $\mu(r, \theta) = (r\cos\theta, r\sin\theta)$. Let $U = \{(r, \theta) \mid r > 0\}$. Then $\mu: U \rightarrow \mathbb{R}^2 - (0,0)$ is a covering map. Hence, there is a one-to-one correspondence between a quotient of $U$ and $\mathbb{R}^2 - (0,0)$. Compute the lift of $\omega_0$ to $U$.
Let's go back to the cylinder, $L$. Another way to look at things is to ask: How can we recover $L$ from the $\theta z$-plane? The answer is to view $L$ as a quotient space. Let's put an equivalence relation, $R$, on the points of $\mathbb{R}^2$: $(\theta_1, z_1) \sim (\theta_2, z_2)$ if and only if $z_1 = z_2$ and $\theta_1 - \theta_2 = 2n\pi$, for some $n \in \mathbb{Z}$. We will denote the quotient of $\mathbb{R}^2$ under this relation as $\mathbb{R}^2/R$. The map $\mu$ now induces a one-to-one map, $\bar\mu$, from $\mathbb{R}^2/R$ onto $L$. Hence, these two spaces are homeomorphic.
Let's suppose now that we have a form on $U$, an open subset of $\mathbb{R}^n$, and we would like to know when it descends to a form on a quotient of $U$. Clearly, if we begin with the lift of a form, then it will descend. Let's try to see why. In general, if $\mu: U \subset \mathbb{R}^n \rightarrow M \subset \mathbb{R}^m$ is a many-to-one map, differentiable at each point of $U$, then the sets $\{\mu^{-1}(p)\}$ partition $U$. Hence, we can form the quotient space, $U/\mu^{-1}$, under this partition. For each $x \in \mu^{-1}(p)$, $D\mu_x$ is a one-to-one map from $T_xU$ to $T_pM$, and hence $D\mu_x^{-1}$ is well-defined. If $x$ and $y$ are both in $\mu^{-1}(p)$, then $D\mu_y^{-1} \circ D\mu_x$ is a one-to-one map from $T_xU$ to $T_yU$. We will denote this map as $D\mu_{xy}$. We say a $k$-form, $\omega$, on $U$ descends to a $k$-form on $U/\mu^{-1}$ if and only if $\omega(V_x^1, \ldots, V_x^k) = \omega(D\mu_{xy}(V_x^1), \ldots, D\mu_{xy}(V_x^k))$, for all $x, y \in U$ such that $\mu(x) = \mu(y)$.
9.12. If $\tau$ is a differential $k$-form on $M$, then $\mu^*\tau$ (the lift of $\tau$) is a differential $k$-form on $U$ which descends to a differential $k$-form on $U/\mu^{-1}$.
Now suppose that we have a $k$-form, $\omega$, on $U$ which descends to a $k$-form on $U/\mu^{-1}$, where $\mu: U \subset \mathbb{R}^n \rightarrow M \subset \mathbb{R}^m$ is a covering map. How can we get a $k$-form on $M$? As we have already remarked, $\bar\mu: U/\mu^{-1} \rightarrow M$ is a one-to-one map. Hence, we can use it to push forward the form, $\omega$. In other words, we can define a $k$-form on $M$ as follows: given $k$ vectors in $T_pM$, we first choose a point, $x \in \mu^{-1}(p)$. We then define
$$\mu_*\omega(V_p^1, \ldots, V_p^k) = \omega\left(D\mu_x^{-1}(V_p^1), \ldots, D\mu_x^{-1}(V_p^k)\right).$$
It follows from the fact that $\omega$ descends to a form on $U/\mu^{-1}$ that it does not matter which point, $x$, we choose in $\mu^{-1}(p)$. Note that although $\mu$ is not one-to-one, $D\mu_x$ is, so $D\mu_x^{-1}$ makes sense.
If we begin with a form on $U$, there is a slightly more general construction of a form on a quotient of $U$, which does not require the use of a covering map. Let $\Gamma$ be a group of transformations of $U$. We say $\Gamma$ acts discretely if for each $p \in U$ there exists an $\epsilon > 0$ such that $N_\epsilon(p)$ does not contain $\gamma(p)$, for any non-identity element, $\gamma \in \Gamma$. If $\Gamma$ acts discretely, then we can form the quotient of $U$ by $\Gamma$, denoted $U/\Gamma$, as follows: $p \sim q$ if there exists $\gamma \in \Gamma$ such that $\gamma(p) = q$. (The fact that $\Gamma$ acts discretely is what guarantees a "nice" topology on $U/\Gamma$.)
Now, suppose $\omega$ is a $k$-form on $U$. We say $\omega$ descends to a $k$-form on $U/\Gamma$ if and only if $\omega(V_p^1, \ldots, V_p^k) = \omega(D\gamma(V_p^1), \ldots, D\gamma(V_p^k))$, for all $\gamma \in \Gamma$.
Now that we have decided what a form on a quotient of $U$ is, we still have to define $n$-chains, and what we mean by integration of $n$-forms over $n$-chains. We say an $n$-chain, $\tilde{C} \subset U$, descends to an $n$-chain, $C \subset U/\Gamma$, if $\gamma(\tilde{C}) = \tilde{C}$, for all $\gamma \in \Gamma$. The $n$-chains of $U/\Gamma$ are simply those which are descendants of $n$-chains in $U$.
Integration is a little more subtle. For this we need the concept of a fundamental domain for $\Gamma$. This is nothing more than a closed subset of $U$ whose interior does not contain two equivalent points, and such that each equivalence class has at least one representative in it. Here is one way to construct a fundamental domain: first, choose a point, $p \in U$. Now, let $D = \{q \in U \mid d(p,q) \leq d(\gamma(p), q) \text{ for all } \gamma \in \Gamma\}$.
Now, let $\tilde{C}$ be an $n$-chain on $U$ which descends to an $n$-chain, $C$, on $U/\Gamma$, and let $\tilde\omega$ be an $n$-form on $U$ that descends to an $n$-form, $\omega$. Let $D$ be a fundamental domain for $\Gamma$ in $U$. Then we define
$$\int_C \omega = \int_{\tilde{C} \cap D} \tilde\omega.$$
Technical note: In general, this definition is independent of which point was chosen in the construction of the fundamental domain, $D$. However, a VERY unlucky choice will result in $\tilde{C} \cap D \subset \partial D$, which could give a different answer for the above integral. Fortunately, it can be shown that the set of such "unlucky" points has measure zero. That is, if we were to choose the point at random, then the odds of picking an "unlucky" point are $0\%$. Very unlucky indeed!
Example 46. Suppose $\Gamma$ is the group of transformations of the plane generated by $(x,y) \rightarrow (x+1, y)$ and $(x,y) \rightarrow (x, y+1)$. The space $\mathbb{R}^2/\Gamma$ is often denoted $T^2$, and referred to as a torus. Topologists often visualize the torus as the surface of a donut. A fundamental domain for $\Gamma$ is the unit square in $\mathbb{R}^2$. The 1-form $dx$ on $\mathbb{R}^2$ descends to a 1-form on $T^2$. Integration of this form over a closed 1-chain, $C \subset T^2$, counts the number of times $C$ wraps around the "hole" of the donut.
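The wrapping count in Example 46 can be illustrated with a short sketch (our choice of example, not the text's): a loop on $T^2$ that wraps three times in the $x$-direction lifts to the path $t \mapsto (3t, 0)$, $t \in [0,1]$, whose endpoints $(0,0)$ and $(3,0)$ are equivalent under $\Gamma$, and integrating $dx$ over it returns the wrapping number.

```python
import sympy as sp

t = sp.symbols('t')
n = 3                          # our example: a loop wrapping three times
lift = (n*t, 0)                # a lift of the loop; (0,0) and (3,0) are Gamma-equivalent

# integrate dx over the lifted path, t in [0, 1]
wraps = sp.integrate(sp.diff(lift[0], t), (t, 0, 1))

print(wraps)  # 3
```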
9.5 Defining manifolds
As we have already remarked, a differential $n$-form on $\mathbb{R}^m$ is just an $n$-form on $T_p\mathbb{R}^m$, for each point $p \in \mathbb{R}^m$, along with some condition about how the form varies in a differentiable way as $p$ varies. All we need to define a form on a space other than $\mathbb{R}^m$ is some notion of a tangent space at every point. We call such a space a manifold. In addition, we insist that at each point of a manifold the tangent space has the same dimension, $n$, which we then say is the dimension of the manifold.
How do we guarantee that a given subset of $\mathbb{R}^m$ is a manifold? Recall that we defined the tangent space to be the span of the partial derivatives of a parameterization. However, insisting that the whole manifold be capable of being parameterized is very restrictive. Instead, we only insist that every point of a manifold lies in a subset that can be parameterized. Hence, if $M$ is an $n$-manifold in $\mathbb{R}^m$, then there is a set of open subsets, $\{U_i\} \subset \mathbb{R}^n$, and a set of differentiable maps, $\{\varphi_i: U_i \rightarrow M\}$, such that $\{\varphi_i(U_i)\}$ is a cover of $M$. (That is, for each point, $p \in M$, there is an $i$, and a point, $q \in U_i$, such that $\varphi_i(q) = p$.)
Example 47. $S^1$, the unit circle in $\mathbb{R}^2$, is a 1-manifold. Let $U_i = (-1,1)$, for $i = 1,2,3,4$, and let $\varphi_1(t) = (t, \sqrt{1-t^2})$, $\varphi_2(t) = (t, -\sqrt{1-t^2})$, $\varphi_3(t) = (\sqrt{1-t^2}, t)$, and $\varphi_4(t) = (-\sqrt{1-t^2}, t)$. Then $\{\varphi_i(U_i)\}$ is certainly a cover of $S^1$ with the desired properties.
9.13. Show that $S^2$, the unit sphere in $\mathbb{R}^3$, is a 2-manifold.
9.6 Differential forms on manifolds
Basically, the definition of a differential $n$-form on an $m$-manifold is the same as the definition of an $n$-form on a subset of $\mathbb{R}^m$ which was given by a single parameterization. First and foremost, it is just an $n$-form on $T_pM$, for each $p \in M$.
Let's say $M$ is an $m$-manifold. Then we know there is a set of open sets, $\{U_i\} \subset \mathbb{R}^m$, and a set of differentiable maps, $\{\varphi_i: U_i \rightarrow M\}$, such that $\{\varphi_i(U_i)\}$ covers $M$. Now, let's say that $\omega$ is a family of $n$-forms, defined on $T_pM$, for each $p \in M$. Then we say that the family, $\omega$, is a differentiable $n$-form on $M$ if $\varphi_i^*\omega$ is a differentiable $n$-form on $U_i$, for each $i$.
Example 48. In the previous section, we saw how $S^1$, the unit circle in $\mathbb{R}^2$, is a 1-manifold. If $(x,y)$ is a point of $S^1$, then $T_{(x,y)}S^1$ is given by the equation $dy = -\frac{x}{y}\,dx$ in $T_{(x,y)}\mathbb{R}^2$, as long as $y \neq 0$. If $y = 0$, then $T_{(x,y)}S^1$ is given by $dx = 0$. We define a 1-form on $S^1$: $\omega = -y\,dx + x\,dy$. (Actually, $\omega$ is a 1-form on all of $\mathbb{R}^2$. To get a 1-form on just $S^1$, we restrict the domain of $\omega$ to the tangent lines to $S^1$.) To check that this is really a differential form, we must compute all pull-backs:
$$\varphi_1^*\omega = \frac{-1}{\sqrt{1-t^2}}\,dt, \quad \varphi_2^*\omega = \frac{1}{\sqrt{1-t^2}}\,dt,$$
$$\varphi_3^*\omega = \frac{1}{\sqrt{1-t^2}}\,dt, \quad \varphi_4^*\omega = \frac{-1}{\sqrt{1-t^2}}\,dt.$$
Since all of these are differentiable on $U_i = (-1,1)$, we can say that $\omega$ is a differential form on $S^1$.
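These four pull-backs can be checked with sympy. In the sketch below (ours), the chart list mirrors $\varphi_1, \ldots, \varphi_4$ from Example 47, and each coefficient is compared against the display above:

```python
import sympy as sp

t = sp.symbols('t')
s = sp.sqrt(1 - t**2)
charts = [(t, s), (t, -s), (s, t), (-s, t)]       # phi_1, phi_2, phi_3, phi_4

# dt coefficient of phi_i* omega, for omega = -y dx + x dy
coeffs = [sp.simplify(-Y*sp.diff(X, t) + X*sp.diff(Y, t)) for X, Y in charts]

expected = [-1/s, 1/s, 1/s, -1/s]
print([sp.simplify(c - e) for c, e in zip(coeffs, expected)])  # [0, 0, 0, 0]
```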
We now move on to integration of $n$-chains on manifolds. The definition of an $n$-chain is no different than before; it is just a formal linear combination of $n$-cells in $M$. Let's suppose that $C$ is an $n$-chain in $M$, and $\omega$ is an $n$-form. Then how do we define $\int_C \omega$? If $C$ lies entirely in $\varphi_i(U_i)$, for some $i$, then we could define the value of this integral to be the value of the integral of $\varphi_i^*\omega$ over $\varphi_i^{-1}(C)$. But it may be that part of $C$ lies in both $\varphi_i(U_i)$ and $\varphi_j(U_j)$. If we define $\int_C \omega$ to be the sum of the two integrals we get when we pull back $\omega$ under $\varphi_i$ and $\varphi_j$, then we end up "double counting" the integral of $\omega$ on $C \cap \varphi_i(U_i) \cap \varphi_j(U_j)$. Somehow, as we move from $\varphi_i(U_i)$ into $\varphi_j(U_j)$, we want the effect of the pull-back of $\omega$ under $\varphi_i$ to "fade out," and the effect of the pull-back under $\varphi_j$ to "fade in." This is accomplished by a partition of unity.
The technical definition of a partition of unity subordinate to the cover $\{\varphi_i(U_i)\}$ is a set of differentiable functions, $f_i: M \rightarrow [0, 1]$, such that $f_i(p) = 0$ if $p \notin \varphi_i(U_i)$, and $\sum_i f_i(p) = 1$ for all $p \in M$. We refer the reader to any book on differential topology for a proof of the existence of partitions of unity.
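A partition of unity is easy to build concretely in one dimension. Here is a minimal numeric sketch for the cover $U_i = (i, i+2)$ of the interval $(1, 10)$ used in the next example, using normalized tent functions (these are only continuous, not smooth, but they illustrate the two defining properties):

```python
def tent(i, p):
    # Hat function supported on (i, i+2), peaking at i+1 with value 1.
    return max(0.0, 1.0 - abs(p - (i + 1)))

def partition_of_unity(p, indices=range(1, 9)):
    # Normalize the tents so the f_i sum to 1 at every point of (1, 10).
    vals = [tent(i, p) for i in indices]
    total = sum(vals)
    return [v / total for v in vals]

for p in (1.2, 2.5, 5.0, 8.7, 9.9):
    f = partition_of_unity(p)
    assert abs(sum(f) - 1.0) < 1e-12            # the f_i sum to 1
    assert all(v == 0.0 for i, v in zip(range(1, 9), f)
               if not (i < p < i + 2))          # f_i vanishes off U_i
```

A genuine (smooth) partition of unity would replace the tents with $C^\infty$ bump functions, but the bookkeeping is identical.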
We are now ready to give the full definition of the integral of an $n$-form on an $n$-chain in an $m$-manifold:
$$\int_C \omega \equiv \sum_i \int_{\varphi_i^{-1}(C)} \varphi_i^*\left(f_i \omega\right).$$
Example 49. Let $M$ be the manifold which is the interval $(1, 10) \subset \mathrm{R}$. Let $U_i = (i, i+2)$, for $i = 1, \ldots, 8$. Let $\varphi_i: U_i \rightarrow M$ be the identity map. Let $\{f_i\}$ be a partition of unity subordinate to the cover $\{\varphi_i(U_i)\}$. Let $\omega$ be a 1-form on $M$. Finally, let $C$ be the 1-chain which consists of the single 1-cell, $[2, 8]$. Then we have
$$\int_C \omega \equiv \sum_{i=1}^{8} \int_{\varphi_i^{-1}(C)} \varphi_i^*\left(f_i \omega\right) = \sum_{i=1}^{8} \int_C f_i \omega = \int_C \left(\sum_{i=1}^{8} f_i\right) \omega = \int_C \omega,
$$
as one would expect!
Example 50. Let $S^1$, $U_i$, $\varphi_i$, and $\omega$ be defined as in Examples 47 and 48. A partition of unity subordinate to the cover $\{\varphi_i(U_i)\}$ is as follows:
(Check this!) Let $\mu: [0, \pi] \rightarrow S^1$ be defined by $\mu(\theta) = (\cos\theta, \sin\theta)$. Then the image of $\mu$ is a 1-cell, $\sigma$, in $S^1$. Let's integrate $\omega$ over $\sigma$:
\begin{aligned}
\int_{\sigma} \omega & \equiv \sum_{i=1}^{4} \int_{\phi_{i}^{-1}(\sigma)} \phi_{i}^{*}\left(f_{i} \omega\right) \\
& = \int_{-(-1,1)} -\sqrt{1-t^{2}}\, dt + 0 + \int_{[0,1)} \sqrt{1-t^{2}}\, dt + \int_{-[0,1)} -\sqrt{1-t^{2}}\, dt \\
& = \int_{-1}^{1} \sqrt{1-t^{2}}\, dt + 2 \int_{0}^{1} \sqrt{1-t^{2}}\, dt \\
& = \pi .
\end{aligned}
CAUTION: Beware of orientations!
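The value $\pi$ can be double-checked without charts or a partition of unity by pulling $\omega$ back along $\mu$ itself: $\mu^*\omega = (\sin^2\theta + \cos^2\theta)\,d\theta = d\theta$, so the integral over $[0, \pi]$ is $\pi$. A numeric sketch of that computation:

```python
import math

# Pull omega = -y dx + x dy back along mu(t) = (cos t, sin t) and
# integrate over [0, pi] with the midpoint rule.
n = 100000
h = math.pi / n
total = 0.0
for k in range(n):
    t = (k + 0.5) * h
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)   # derivatives of the parameterization
    total += (-y * dx + x * dy) * h      # integrand is identically 1
assert abs(total - math.pi) < 1e-9
```

That the chart-by-chart answer agrees with this direct one is exactly what the partition-of-unity definition is designed to guarantee.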
9.7 Application: DeRham cohomology
One of the predominant uses of differential forms is to give global information about manifolds. Consider the space $\mathrm{R}^2 - (0,0)$, as in Example 44. Near every point of this space we can find an open set which is identical to an open set around a point of $\mathrm{R}^2$. This means that all of the local information in $\mathrm{R}^2 - (0,0)$ is the same as the local information in $\mathrm{R}^2$. The fact that the origin is missing is a global property.
For the purposes of detecting global properties, certain forms are interesting, and certain forms are completely uninteresting. We will spend some time discussing both. The interesting forms are the ones whose derivative is zero. Such forms are said to be closed. An example of a closed 1-form was $\omega_0$, from Example 44 of the previous chapter. For now, let's just focus on closed 1-forms so that you can keep this example in mind.
Let's look at what happens when we integrate a closed 1-form $\omega_0$ over a 1-chain $C$ such that $\partial C = 0$ (i.e., $C$ is a closed 1-chain). If $C$ bounds a disk $D$, then Stokes' Theorem says
$$\int_{C} \omega_{0} = \int_{D} d \omega_{0} = \int_{D} 0 = 0 .$$
In a sufficiently small region of every manifold, every closed 1-chain bounds a disk. So integrating closed 1-forms on "small" 1-chains gives us no information. In other words, closed 1-forms give no local information.
Suppose now that we have a closed 1-form $\omega_0$ and a closed 1-chain $C$ such that $\int_C \omega_0 \neq 0$. Then we know $C$ does not bound a disk. The fact that there exists such a 1-chain is global information. This is why we say that the closed forms are the ones that are interesting, from the point of view of detecting global information.
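For instance, on $\mathrm{R}^2 - (0,0)$ the 1-form $\omega_0 = \frac{-y\,dx + x\,dy}{x^2 + y^2}$ (the standard "angle form"; that this matches Example 44 exactly is an assumption here) is closed, yet its integral over the unit circle is $2\pi \neq 0$, so the unit circle bounds no disk inside $\mathrm{R}^2 - (0,0)$. A numeric check:

```python
import math

# Integrate omega_0 = (-y dx + x dy)/(x^2 + y^2) over the unit circle,
# parameterized by (cos t, sin t) for 0 <= t <= 2*pi.
n = 100000
h = 2 * math.pi / n
total = 0.0
for k in range(n):
    t = (k + 0.5) * h
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)
    total += (-y * dx + x * dy) / (x * x + y * y) * h
assert abs(total - 2 * math.pi) < 1e-9
```

The nonzero answer is precisely the global information: it detects the missing origin.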
Now let's suppose that we have a 1-form $\omega_1$ that is the derivative of a 0-form $f$ (i.e., $\omega_1 = df$). We say such a form is exact. Again, let $C$ be a closed 1-chain. Let's pick two points, $p$ and $q$, on $C$. Then $C = C_1 + C_2$, where $C_1$ goes from $p$ to $q$ and $C_2$ goes from $q$ back to $p$. Now let's do a quick computation:
\begin{aligned}
\int_{C} \omega_{1} & =\int_{C_{1}+C_{2}} \omega_{1} \\
& =\int_{C_{1}} \omega_{1}+\int_{C_{2}} \omega_{1} \\
& =\int_{C_{1}} d f+\int_{C_{2}} d f \\
& =\int_{q-p} f+\int_{p-q} f \\
& =0 .
\end{aligned}
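As a concrete instance, take the (arbitrarily chosen) 0-form $f = x^2 y$, so $\omega_1 = df = 2xy\,dx + x^2\,dy$, and integrate around the unit circle; the result is zero to numerical precision:

```python
import math

# omega_1 = df for f(x, y) = x^2 * y, so df = 2xy dx + x^2 dy.
# Integrate it over the closed 1-chain given by the unit circle.
n = 100000
h = 2 * math.pi / n
total = 0.0
for k in range(n):
    t = (k + 0.5) * h
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)
    total += (2 * x * y * dx + x * x * dy) * h
assert abs(total) < 1e-9
```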
So integrating an exact form over a closed 1-chain always gives zero. This is why we say the exact forms are completely uninteresting. Unfortunately, in Problem 6.9 we learned that every exact form is also closed. This is a problem, since this would say that all of the completely uninteresting forms are also interesting! To remedy this we define an equivalence relation.
We pause here for a moment to explain what this means. An equivalence relation is just a way of taking one set and creating a new set by declaring certain objects in the original set to be "the same." This is the idea behind telling time. To construct the clock numbers, start with the integers and declare two to be "the same" if they differ by a multiple of 12. So $10 + 3 = 13$, but 13 is the same as 1, so if it's now 10 o'clock, then in three hours it will be 1 o'clock.
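The clock analogy can be made literal. A tiny sketch, mapping each integer to the representative of its equivalence class that a 12-hour clock actually displays:

```python
def clock(n):
    # Equivalence class of n under "differ by a multiple of 12",
    # using the representatives 1..12 that clock faces show.
    h = n % 12
    return 12 if h == 0 else h

assert clock(10 + 3) == 1     # three hours after 10 o'clock is 1 o'clock
assert clock(13) == clock(1)  # 13 and 1 are "the same"
```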
We play the same trick for differential forms. We will restrict ourselves to the closed forms, but we will consider two of them to be "the same" if their difference is an exact form. The set which we end up with is called the cohomology of the manifold in question. For example, if we start with the closed 1-forms, then, after our equivalence relation, we end up with the set which we will call $H^1$, or the first cohomology (see Figure 9.2).
Fig. 9.2. Defining $H^n$.
Note that the difference between an exact form and the form which always returns the number zero is an exact form. Hence, every exact form is equivalent to 0 in $H^n$, as in the figure.
For each $n$ the set $H^n$ contains a lot of information about the manifold in question. For example, if $H^1 \cong \mathrm{R}^1$ (as it turns out is the case for $\mathrm{R}^2 - (0,0)$), then this tells us that the manifold has one "hole" in it. Studying manifolds via cohomology is one topic of the field of mathematics called algebraic topology. For a complete treatment of this subject, see [BT95].
A Non-linear forms

A.1 Surface area
Now that we have developed some proficiency with differential forms, let's see what else we can integrate. A basic assumption that we used to come up with the definition of an $n$-form was that at every point it is a linear function which "eats" $n$ vectors and returns a number. But what about the non-linear functions?
Let's go all the way back to Section 3.5. There we decided that the integral of a function $f$ over a surface $R$ in $\mathrm{R}^3$ should look something like:
\begin{equation*}
\int_{R} f(\phi(r, \theta)) \operatorname{Area}\left[\frac{\partial \phi}{\partial r}(r, \theta), \frac{\partial \phi}{\partial \theta}(r, \theta)\right] dr\, d\theta . \tag{A.1}
\end{equation*}
At the heart of the integrand is the Area function, which takes two vectors and returns the area of the parallelogram that they span. The 2-form $dx \wedge dy$ does this for two vectors in $T_p\mathrm{R}^2$. In $T_p\mathrm{R}^3$ the right function is the following:
" Area "(V_(p)^(1),V_(p)^(2))=sqrt((dy^^dz)^(2)+(dx^^dz)^(2)+(dx^^dy)^(2)).\text { Area }\left(V_{p}^{1}, V_{p}^{2}\right)=\sqrt{(d y \wedge d z)^{2}+(d x \wedge d z)^{2}+(d x \wedge d y)^{2}} .
(The reader may recognize this as the magnitude of the cross product of $V_p^1$ and $V_p^2$.) This is clearly non-linear!
Example 51. The area of the parallelogram spanned by $\langle 1,1,0\rangle$ and $\langle 1,2,3\rangle$ can be computed as follows:
$$\sqrt{(dy \wedge dz)^{2}+(dx \wedge dz)^{2}+(dx \wedge dy)^{2}}\left(\langle 1,1,0\rangle, \langle 1,2,3\rangle\right) = \sqrt{3^2 + 3^2 + 1^2} = \sqrt{19} .$$
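The Area function is straightforward to code: each squared term is a $2 \times 2$ minor of the matrix whose rows are the two vectors. A sketch, checked against Example 51 (where the answer works out to $\sqrt{19}$):

```python
import math

def area(v, w):
    # sqrt((dy^dz)^2 + (dx^dz)^2 + (dx^dy)^2) evaluated on the pair (v, w);
    # each 2-form is a 2x2 minor of the matrix with rows v and w.
    dydz = v[1] * w[2] - v[2] * w[1]
    dxdz = v[0] * w[2] - v[2] * w[0]
    dxdy = v[0] * w[1] - v[1] * w[0]
    return math.sqrt(dydz ** 2 + dxdz ** 2 + dxdy ** 2)

# Example 51: the parallelogram spanned by <1,1,0> and <1,2,3>.
assert abs(area((1, 1, 0), (1, 2, 3)) - math.sqrt(19)) < 1e-12
```

This is the same number as the magnitude of the cross product $v \times w$, as noted above.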
The thing that makes (linear) differential forms so useful is the generalized Stokes' Theorem. We do not have anything like this for non-linear forms, but that is not to say that they do not have their uses. For example, there is no differential 2-form on $\mathrm{R}^3$ that one can integrate over arbitrary surfaces to find their surface area. For that we would need to compute the following:
$$\operatorname{Area}(R)=\int_{R} \sqrt{(dy \wedge dz)^{2}+(dx \wedge dz)^{2}+(dx \wedge dy)^{2}} .$$
For relatively simple surfaces, this integrand can be evaluated by hand. Integrals such as this play a particularly important role in certain applied problems. For example, if one were to dip a loop of bent wire into a soap solution, the resulting film would be the surface of minimal area. Before one can even begin to figure out what surface this is for a given piece of wire, one must know how to compute the area of an arbitrary surface, as above.
Example 52. We compute the surface area of a sphere of radius $r$ in $\mathrm{R}^3$. A parameterization is given by
$$\Phi(\theta, \phi)=(r \sin\phi \cos\theta, r \sin\phi \sin\theta, r \cos\phi),$$
where $0 \leq \theta \leq 2\pi$ and $0 \leq \phi \leq \pi$. Now we compute:
{:[" Area "((del phi)/(del theta),(del phi)/(del phi))],[=" Area "((:-r sin phi sin theta","r sin phi cos theta","0:)","(:r cos phi cos theta","r cos phi sin theta","-r sin phi:))],[=sqrt((-r^(2)sin^(2)phi cos theta)^(2)+(r^(2)sin^(2)phi sin theta)^(2)+(-r^(2)sin phi cos phi)^(2))],[=rsqrt(sin^(4)phi+sin^(2)phicos^(2)phi)],[=r sin phi]:}\begin{aligned}
& \text { Area }\left(\frac{\partial \phi}{\partial \theta}, \frac{\partial \phi}{\partial \phi}\right) \\
& =\text { Area }(\langle-r \sin \phi \sin \theta, r \sin \phi \cos \theta, 0\rangle,\langle r \cos \phi \cos \theta, r \cos \phi \sin \theta,-r \sin \phi\rangle) \\
& =\sqrt{\left(-r^{2} \sin ^{2} \phi \cos \theta\right)^{2}+\left(r^{2} \sin ^{2} \phi \sin \theta\right)^{2}+\left(-r^{2} \sin \phi \cos \phi\right)^{2}} \\
& =r \sqrt{\sin ^{4} \phi+\sin ^{2} \phi \cos ^{2} \phi} \\
& =r \sin \phi
\end{aligned}
And so the desired area is given by
\begin{aligned}
& \int_{S} \operatorname{Area}\left(\frac{\partial \Phi}{\partial \theta}, \frac{\partial \Phi}{\partial \phi}\right) d \theta\, d \phi \\
= & \int_{0}^{\pi} \int_{0}^{2 \pi} r^{2} \sin \phi \, d \theta\, d \phi \\
= & 4 \pi r^{2} .
\end{aligned}
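The surface area of a sphere of radius $r$ is $4\pi r^2$, and this can be confirmed by integrating the Area integrand $r^2 \sin\phi$ numerically (the $\theta$ integral contributes a factor of $2\pi$, since the integrand does not depend on $\theta$):

```python
import math

r = 2.0                      # arbitrary radius for the check
n = 10000
dphi = math.pi / n
total = 0.0
for i in range(n):
    phi = (i + 0.5) * dphi
    # theta-integral of r^2 sin(phi) over [0, 2*pi] is 2*pi * r^2 sin(phi)
    total += r * r * math.sin(phi) * dphi * (2 * math.pi)
assert abs(total - 4 * math.pi * r * r) < 1e-4
```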
A.1. Compute the surface area of a sphere of radius $r$ in $\mathrm{R}^3$ using the parameterizations
$$\Phi(\rho, \theta)=\left(\rho \cos \theta, \rho \sin \theta, \pm \sqrt{r^{2}-\rho^{2}}\right)$$
for the top and bottom halves, where $0 \leq \rho \leq r$ and $0 \leq \theta \leq 2\pi$.
Let's now go back to Equation A.1. Classically, this is called a surface integral. It might be a little clearer how to compute such an integral if we write it as follows:
$$\int_{R} f(x, y, z)\, dS=\int_{R} f(x, y, z) \sqrt{(dy \wedge dz)^{2}+(dx \wedge dz)^{2}+(dx \wedge dy)^{2}} .$$
A.2 Arc length
Lengths are very similar to areas. In calculus you learn that if you have a curve $C$ in the plane, for example, parameterized by the function $\varphi(t)=(x(t), y(t))$, where $a \leq t \leq b$, then its arc length is given by
$$\operatorname{Length}(C)=\int_{a}^{b} \sqrt{\left(\frac{dx}{dt}\right)^{2}+\left(\frac{dy}{dt}\right)^{2}}\, dt .$$
We can write this without making reference to the parameterization by employing a non-linear 1-form:
$$\operatorname{Length}(C)=\int_{C} \sqrt{dx^{2}+dy^{2}} .$$
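As a check on the arc length formula, here is the circumference of a circle of radius 3, computed from the parameterization $\varphi(t) = (3\cos t, 3\sin t)$ (expected value $6\pi$):

```python
import math

# Length of phi(t) = (3 cos t, 3 sin t), 0 <= t <= 2*pi,
# via the integral of sqrt((dx/dt)^2 + (dy/dt)^2).
n = 100000
h = 2 * math.pi / n
length = 0.0
for k in range(n):
    t = (k + 0.5) * h
    dx_dt = -3 * math.sin(t)
    dy_dt = 3 * math.cos(t)
    length += math.sqrt(dx_dt ** 2 + dy_dt ** 2) * h
assert abs(length - 6 * math.pi) < 1e-9
```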
[Arn97] V. I. Arnold. Mathematical Methods of Classical Mechanics. Springer, 1997.
[BT95] Raoul Bott and Loring Tu. Differential Forms in Algebraic Topology. Springer, 1995.
[GP74] Victor Guillemin and Alan Pollack. Differential Topology. Prentice Hall, 1974.
[HH01] John Hubbard and Barbara Hubbard. Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach. Prentice Hall, 2001.
[MTW73] Charles Misner, Kip Thorne, and John Wheeler. Gravitation. W. H. Freeman, 1973.
Index
nn-form
0 -form
1-form
2-form
4-current
algebraic topology
arc length
area form
boundary
cell
chain
change of variables formula
charge density
closed chain
closed form
contact structure
cover
covering map
critical point
cross product
curl
current density
cylindrical coordinates
DeRham cohomology
derivative
of differential form
of parameterized curve
of parameterized surface
partial
Descartes, René
determinant
differential form
dimension
directional derivative
discrete action
div
dot product
equivalence relation
exact form
Faraday
foliation
Fubini's theorem
fundamental domain
Fundamental Theorem of Calculus
Gauss' Divergence Theorem
Godbillon-Vey invariant
grad
gradient
gradient field
Green's theorem
group
Abstract
invariant
kernel
lattice
Legendrian curve
level curve
lift
line field
line integral
linear function
manifold
Maxwell
Maxwell's equations
measure zero
multiple integral
open set
orientation
induced
parameterization
parameterized curve
parameterized line
parameterized region
parameterized surface
partial derivative
partition of unity
plane field
pull-back
rectangular coordinates
Reeb foliation
Riemann sum
second partial
spherical coordinates
Stokes' theorem
classical
generalized
substitution rule
surface area
surface integral
tangent plane
tangent space
torus
transformation
vector
addition
unit
vector calculus
vector field
volume form
wedge product
winding number
Solutions

$z=\left(x^{2}+y^{2}\right)^{3/2}$, $z=r^{3}$, $z=(\rho \sin \phi)^{3}$
2.10
$\phi(u, z)=(u, u, z)$
$\phi(r, \theta)=\left(r \cos \theta, r \sin \theta, r^{2}\right)$
$\psi(\theta, \phi)=(\phi \sin \phi \cos \theta, \phi \sin \phi \sin \theta, \phi \cos \phi)$
$\psi(\theta, \phi)=\left(\cos \phi \sin \phi \cos \theta, \cos \phi \sin \phi \sin \theta, \cos ^{2} \phi\right)$
$0 \leq t \leq 1$, $a \leq \theta \leq b$
2.23 $\varphi(r, \theta)=(3 r \cos \theta, 2 r \sin \theta)$, $0 \leq r \leq 1$, $0 \leq \theta \leq 2 \pi$
Chapter 4
4.2
$-1, 4, 10$
$dy=-4\, dx$
4.3
$3\, dx$
$\frac{1}{2}\, dy$
$3\, dx+\frac{1}{2}\, dy$
$8\, dx+6\, dy$
4.5
$\omega\left(V_{1}\right)=-8$, $\nu\left(V_{1}\right)=1$, $\omega\left(V_{2}\right)=-1$, and $\nu\left(V_{2}\right)=2$.
-15
5
4.15 $-127$
4.16 $c_{1}=-11$, $c_{2}=4$, and $c_{3}=3$
4.17
$2\, dx \wedge dy$
$dx \wedge(dy+dz)$
$dx \wedge(2\, dy+dz)$
$(dx+3\, dz) \wedge(dy+4\, dz)$
4.29 $252$
4.30
-87
-29
$5$
4.31 $dx \wedge dy \wedge dz$
4.33
$z(x-y)\, dz \wedge dx+z(x+y)\, dz \wedge dy$
6.3a $\omega=(-2 x-1)\, dx \wedge dy$
6.6 $d\left(x^{2} y\, dx \wedge dy+y^{2} z\, dy \wedge dz\right)=0$
6.7 $-1, 1, 1$
6.11
$(-\sin x-\cos y)\, dx \wedge dy$
$\left(3 x^{2} z-2 x y\right)\, dx \wedge dy-\left(x^{3}+1\right)\, dy \wedge dz$
$\left(y^{2}-9 z^{8}\right)\, dx \wedge dy \wedge dz$
$0$
6.12 $\left(3 x^{4} y^{2}-4 x y^{6} z\right)\, dx \wedge dy \wedge dz$
6.14
$x\, dy$
$x\, dy \wedge dz$
$xyz$
$x y^{2} z^{2}$
$\sin \left(x y^{2}\right)\, dx+\sin \left(x y^{2}\right)\, dy$